
Interview with Pablo Ortellado (Desentorpecendo a razão)


By Coletivo DAR and Desinformémonos, September 10, 2013

With an activist trajectory that began in the punk movement and continued through the anti-globalization struggles at the turn of the 1990s into the 2000s, Pablo Ortellado is today a professor of public policy management at USP Leste. A reference point for activists in the autonomist movement and author of "Estamos vencendo: resistência global no Brasil," Ortellado will soon release the book "20 centavos: a luta contra o aumento," written in partnership with Elena Judensnaider, Luciana Lima, and Marcelo Pomar. It was precisely to make sense of the conjuncture of social struggles after the explosive month of June, the subject of the book, that he found time in the final stretch of writing to talk with Coletivo DAR and Desinformémonos, which with this joint post hope to contribute to reflection on what they share: the search for transformation and for autonomy.

In the interview, Ortellado sketches a rich panorama of what the autonomist movement has been in São Paulo and in Brazil since its origins, a reflection that becomes all the more important at a moment when everyone is trying to learn and absorb the lessons of horizontality and direct action brought by the massive mobilizations that swept the country. He discusses the movement's Zapatista roots, how from the beginning it was identified with technologies that were in fact created within it, and the importance of initiatives such as the Centro de Mídia Independente (CMI) in that process.

Alongside indispensable reflections on what the MPL has to teach anti-capitalist activists, there was also time for a critique of Fora do Eixo, which Ortellado defines not as an experiment in alternative politics but as a "hypercapitalist" organization.

When did your activism begin?

My militancy began in the punk movement of the 1980s. It was the generation of punks who became politicized at the very end of the 1980s. It must have been the second or third generation of punk in São Paulo. There was the one from the late 1970s, which we called the "82 generation," who were older guys. Ours runs up to 1988, 1989; it was a generation that became politicized, that brought the counterculture together with political action.

It was the generation that found the unions. In 1987 and 1988 we were trying to organize an office-boys' union (I was an office boy myself) as part of an effort to refound the Confederação Operária Brasileira, which had been the anarchist workers' confederation of the 1910s.

And there was the process of meeting the old-timers, the "anarcho-grandpas" as we called them, at the Centro de Cultura Social in the 1980s. It was very important for my generation, and I think for them too. It meant reencountering anarchism, which was resurfacing in a very different way. It was also an important encounter because it brought together two militant generations. They were no longer the generation of the 1917 strike; they were the defeated generation of the 1930s. The Centro de Cultura Social was a strategy of a workers' movement defeated first by the communists and then by the Estado Novo: they opted for this cultural strategy of founding social culture centers. They still kept the history of the social struggles of the 1910s and 1920s, which they had received from their elders, but they were a generation of defeat, one that kept the legacy of anarchism in São Paulo alive through almost the entire twentieth century.

For them, I think, it must have been very invigorating to see that crowd of teenagers in the 1980s full of interest in anarchism. And it was an interesting exchange, in which we learned a great deal from them about that whole history of struggles, and they learned from us too. Strange as it may seem, although we came from the counterculture, a world apart from the anarchist unions of the 1930s, there were many points of contact, including in the countercultural elements. For example, the Centro de Cultura Social owns only one property, a naturist farm. So they were naturists, they were vegetarians, they practiced free love. Many of the countercultural elements we were still carrying in those years found support and a mirror in the old-timers' historical experience (the punk movement in São Paulo had some quite particular characteristics; we were somewhat incipient, somewhat confused), and there was a great deal of exchange.

There were really interesting cases, already in the 1990s, of the old-timers becoming vegan, something entirely of our generation, out of that exchange with the punks. There were some quite peculiar things: mine was a generation that did not use drugs. I never smoked marijuana, and I only started drinking after I was 25. That had nothing to do with the straight-edge scene; it was something else. It came from the culture of militant discipline of 1930s anarchism, something that we, or at least my group from Santa Cecília, which frequented the Centro de Cultura Social, incorporated. We did not tolerate people who drank and smoked, because "that was for alienated people." A perception totally at odds with punk culture, which is gutter punk drinking cachaça.

And that was already there.

Oh yes, of course, it was the dominant culture. Our generation, the generation of punks who became politicized, was disgusted by it; we thought it was playboy stuff. I don't know, I think later generations didn't have that, but the one that had this contact incorporated elements that were not ours; they were the old-timers' elements. Even with that link to the counterculture (we tend to think the counterculture was an invention of the 1960s, of the hippies), we saw in the workers' movement plenty of debates about feminism, about nature, about vegetarianism, all those elements we considered countercultural, very much present.

They handled all of that very matter-of-factly, though in a totally different way. Our discourse was anti-disciplinary, against institutions, against rules. Theirs was not; it was self-disciplinary. The idea of practicing free love was about not needing the State or the Church to dictate how things should be, something like "we make our own rules," as opposed to "we reject the rules." There were points of contact and differences, and my generation comes out of that exchange, at the end of the 1980s.

What were the 1990s like?

I followed little of it, because I went to the United States, was active in hardcore, did other things, and only picked it up again at the end of the 1990s, so I have a gap in the history of São Paulo. By the late 1990s the anti-globalization movement emerges, which is the confluence of several things. There was that anarchist culture tied to the counterculture, to hardcore, to punk; there was a libertarian tradition in the student movement, mainly at FFLCH; and there was that worldwide context of resistance to neoliberalism.

At the end of the 1990s several of us were following the mobilization mailing lists; a lot was happening in Latin America. There was the UNAM strike, and then the 501 in Argentina. The 501 was a very important movement; its people would later come into the anti-globalization movement. It arose against the electoral process: there is a law making the vote mandatory if you are within 500 km of your electoral domicile. So the movement was a caravan, with several activist groups, that took you 501 km away.

That left a deep mark on the Argentine left, because it was an initiative of young non-anarchists; it was what would become Argentine autonomism. They organized assemblies whose slogan was "There is politics beyond the vote." So we were very inspired by these things. Campaigns in support of the Zapatistas, the UNAM strike, and the 501 movement were more or less the Latin American panorama.

And then in 1997, at the Zapatistas' Second Intergalactic Encounter for Humanity and Against Neoliberalism, the idea emerged of founding Peoples' Global Action (AGP), which was to confederate grassroots social movements oriented toward direct action, in order to organize a global opposition to neoliberalism.

In Brazil, what was the AGP like?

In Brazil, this was how we started: the counterculture of hardcore and punk, the student movement (mainly at USP) not tied to parties, and small collectives, small feminist collectives, small environmentalist collectives… That was the mix. We began meeting, getting into it in earnest, in the year 2000.

Was there already an internet culture in the movement?

Yes, it was thoroughly an internet culture. One of the things that completely distinguishes the anti-globalization movement from the earlier movements it descends from is that it was organized globally through and through.

Before, things traveled, mirrored one another, one struggle influenced another, but there was no actual organization through which the struggles corresponded. With the internet that changed completely; this movement was fully articulated.

For example, we suffered repression at the A20, on April 20, 2001. Then people occupied the Brazilian embassy in Amsterdam, and in Rome, and so on, completely coordinated, based on the reports we sent to our comrades. And vice versa: when Carlo Giuliani was killed in 2001, we occupied the Italian consulate, and it was not something spontaneous; it was coordinated through the solidarity network. We rehearsed that possibility of horizontal organization on an international level.

Would you say the great inspiration was the Zapatistas?

Absolutely. The idea of the AGP was born at a Zapatista encounter, in Barcelona. The idea was born there, and the AGP's first founding conference was held in Geneva in 1998.

What were the principles that linked these people and collectives?

I don't remember all of them exactly off the top of my head, but they were the principles of autonomy, of horizontality, the idea of not being an organization. The AGP was not an organization; it was a kind of network of solidarity and struggle. The idea of a diversity of strategies of struggle, of not having a single line imposed on everyone, of rejecting the established models of struggle, and a very strong critique of all forms of oppression. That was not necessarily new, but we took it very seriously. We incorporated the struggles of feminism and of the black movement in a very strong way.

In fact, we saw the globalization process as an opportunity to federate the struggles that had fragmented in the 1960s; that was our reading. Before the 1960s it was the workers' movement that led the social struggle; afterward it fragmented into the feminist movement, the ecological movement, the black movement, and so on.

And our idea was that the process of economic globalization made it possible to federate those struggles, because it affected the women working in a sweatshop in Mexico; it touched the problem of deforestation, because environmental regulations were being suspended to make countries competitive, so the environmental movement could join in; and the labor movement too, because labor-rights protections were also being suspended to make the workforce flexible, and so on.

When we adopted the name anti-capitalism at the end of the 1990s, curiously, it had a different meaning, because it was not economic. It was the idea that capitalism was the sum of all those forms of domination and exploitation, and that anti-capitalism was the federation of all those struggles into one common struggle, the struggle against neoliberalism. It really was an attempt. So much so that, for example, our AGP network here included several feminist groups, several environmental groups, a few small unions; in Ceará there were people from the Movimento dos Trabalhadores Sem Terra (MST). We confederated very different struggles, though here they were strongly oriented toward the fight against the ALCA [Free Trade Area of the Americas].

How did the AGP work at the global level? How did a collective from here link up with it, relate to it?

The AGP was nothing. The AGP was an idea, a charter of principles. Anyone who respected that charter of principles raised their hand. Formally, at the conferences, there were AGP delegations. So at the first one, held in Geneva in 1998, the Brazilian delegate was the MST. And the MST joined our first demonstrations, took part. If I'm not mistaken, our first meeting was actually at the headquarters of Consulta Popular; it was the MST that provided all the infrastructure. But little by little, let's say, they kept participating, yet they were not the most important actors in the AGP here.

And afterward, what happened that led the AGP to come apart?

That cycle wore itself out. We were very active from 1998 on; I believe the peak was in 2000, when we did the S26, blocking a World Bank and IMF meeting in Prague, and I think that was our great achievement. We carried out actions in about 300 cities around the world, entirely dominated by the autonomist movement, led by us; the great political feat was having coordinated protests in hundreds of cities, involving millions of people. Afterward there were other big ones, protests in Genoa; there were several.

But the formula was to try to copy the success of Seattle. The original idea was the "carnivals against capitalism," proposed by the comrades in London, from Reclaim the Streets. They came from the confluence of the movement to politicize raves and the anti-roads movement, tied to environmental collectives. They threw those street parties that blocked roads and so on, and it was they who launched the idea of global carnivals against capitalism. It had that somewhat countercultural character.

That idea was launched, and the first time it actually happened globally was the J18, June 18, 1999, in dozens of cities. Right after that came Seattle, on November 30, 1999, and then September 26, 2000. By 2000 we were already completely articulated globally; that was when it reached hundreds of cities.

Seattle had worked very well, because they managed to physically block the WTO [World Trade Organization] meeting, the Millennium Round. The strategy was to take a map of the venue and cut off every access route by blockading streets. It delayed the delegates, the unions were holding a huge demonstration, Clinton was against the wall because an election was coming up, and all of this created chaos. And the Millennium Round, which was an extremely ambitious project of worldwide economic deregulation, failed miserably.

That became a kind of paradigm for the anti-globalization movement: we hold big demonstrations trying to block or invade the summits, plus hundreds of protests around the world to raise the pressure. We did it here too; there was the IDB [Inter-American Development Bank] meeting in 2002, and we held protests in Fortaleza, for example.

But that model began to exhaust itself, because we were always chasing after summits, and a feeling set in that we were spinning our wheels, that the Seattle experience would never happen again. Then came September 11, which hardened the way the State fought this movement in the US and other countries, threatening to apply anti-terrorism laws on the one hand, while on the other the threat of war on a global scale gradually shifted us toward the anti-war movement. I think those two issues pushed the model toward exhaustion, on top of the internal aspects, the feeling that we weren't getting anywhere.

There was a natural movement of returning to the local collectives. The people in Argentina, for example, went en masse into the piquetero movement, some into the assembly movement. Here, a number of people went into the Movimento Passe Livre (MPL).

Besides the blockades and solidarity demonstrations, what were the AGP's other lines of action?

Here, for example, we ran a lot of public campaigns. We did the whole campaign against the ALCA long before the churches and the political parties came in. We went to schools, unions, neighborhood associations; we produced hundreds of pamphlets and newspapers; we mounted a well-structured information campaign: explaining what the ALCA was, what its impacts were, what it meant for labor relations, for the environment, and so on. All of that articulated with the protests. I think we reached a good level of organization, and there were plenty of people sympathetic to the movement who financed our publications.

Although I do think that for the thing to succeed, the participation of the political parties and the Catholic Church, and the plebiscite held on the ALCA, were important; that helped bury the ALCA, by around 2003.

What were the characteristics and specificities of the movements that followed that process, which today we have come to call autonomist movements?

The idea of autonomous collectives was built largely within the anti-globalization movement. Before that we had an anarchist scene. This has to do with the Brazilian counterculture, which never developed an interface with politics. There were two separate paths: on one track the tropicalists and the hippies; on the other, the armed struggle against the military dictatorship and the critique of Stalinism. Those things never met here, as they did, for example, in the United States or in Italy.

That fusion only happened here with the punks. Punk was apolitical at first; what we call 82 punk was associated with something more social than political: it was the working class shouting with a guitar. A confused shout, a shout without political experience, but shouting. A cry of revolt against poverty and exclusion, mixed with something countercultural, existentialist, all jumbled together and very confused.

The anarchism thing belongs to the end of the second and to the third generation of punk, which drew closer to the union tradition of Brazilian anarchism. There is one very striking event: on May 1, 1988, we did security for the demonstration, together with the CUT [Central Única dos Trabalhadores]. It was the COB, the Confederação Operária Brasileira that we were trying to refound, alongside the CUT and other union federations. At that moment an anarchist group was beginning to take shape among the punks, one that read Proudhon and Bakunin and frequented the Centro de Cultura Social; it was a novelty.

Other groups of punks, mixed with skinheads, were drifting toward the right; that split only happened at the end of the 1980s. They went to that May Day to fight, and they took a beating. Less from the punks, because we were pretty scrawny, more from the unionists. They got thrashed. I see it as a landmark of that ideological split in the Brazilian counterculture. For me, the fusion of counterculture and politics was sealed that day. There is a right and there is a left. The skinheads started flirting with Integralism and became fascists, and the other group were the anarchists.

In the anti-globalization movement there was, on the one hand, the rejection of political parties and of hierarchy; there was a libertarian discourse, but with a broader vision. We talked with NGOs, with movements more traditional in their forms of organization, and a certain differentiation from a more programmatic anarchism began to appear. The idea of the autonomist movement began to emerge, also under the inspiration of more autonomist political theory, whether in the French tradition, like Castoriadis, or the Italian, like Negri or Mario Tronti. Some people started reading them, and that understanding developed, of joining up also with the more horizontalist dissident Marxists, who had never been welcome in the anarchist movement for historical reasons. Autonomy from the State and from the market. That is a construction of the late 1990s and early 2000s.

And the World Social Forum?

The World Social Forum was a construction entirely separate from ours. There was the World Economic Forum in Davos, which gathered representatives of companies, governments, and academia in a big summit of world leaders to discuss matters of global interest. The World Social Forum was originally conceived in exactly the same terms. It was meant to be a meeting of union leaderships, social-movement leaderships, and political-party leaderships, in a grand counter-hegemonic conference.

By then the anti-globalization movement already existed, a horizontalist grassroots movement, and it already carried weight. The first proposals for the Forum were for a summit to which people would register leaders. Then they opened up something called workshops, which allowed everyone to participate. And the workshops took off, because the movement was horizontal. And that subverted the format.

Although it didn't come from our tradition, the Forum's organizers knew how to incorporate, let's say, that horizontalist character and let the gathering become something else. It became more participatory, richer. We never took an active part in building it, but we took advantage of the fact that people came from all over the world, and at least until 2005 we always held parallel gatherings.

So you see the MPL as a kind of continuation of that process?

Totally. The MPL has two origins: the anti-globalization movement, and another that comes ideologically from Trotskyism, but perhaps even more from the student movement for the free fare. In the 1990s there was a strong fight for the student free pass, inspired by the gains won in Rio de Janeiro, so there is a strong student tradition.

In Florianópolis, through the Juventude Revolução Independente, they began to defend this idea of autonomy from within Trotskyism, out of an internal reading and reflection inside Trotskyism, working with the concept of autonomy in that field of the student free-fare struggle. So the MPL is born from that Florianópolis experience: on one side the Juventude Revolução Independente, itself tied to the student free-fare process of the 1990s, and on the other, the anti-globalization movement.

Several militants from the anti-globalization movement made up the original MPL in Florianópolis. When we held the first MPL meeting at the 2005 World Social Forum, many people from the anti-globalization movement were present, and above all from the CMI [Centro de Mídia Independente], which is, let's say, the movement's most organized face.

The CMI was practically the media expression of the anti-globalization movement. And the CMI was organized: there were several local groups that held meetings. It always functioned as a kind of skeleton for the AGP, more clearly organized because it had collectives, addresses, global communication, a reference website. It was very important for the anti-globalization movement as a whole. And it served as a vehicle for spreading the MPL. So much so that I think nearly all the first MPLs in 2005 came out of CMI collectives.

Coming to the present, how do you see the collectives and autonomist movements that were involved in the wave of mobilizations that began in June? And also the new connections among these movements that this cycle of struggles has been driving?

I think the experience of the fight against the fare hike brought a very important qualitative leap. This tradition of struggle that we trace back to Zapatismo, to the anti-globalization movement, more recently to Occupy Wall Street and the 15M, and further back to May 1968 and the struggles of Italian autonomia, is strongly marked by the value it places on the process of organizing.

That is, the idea that we should practice prefigurative politics. That the movement's form of organization should mirror the society we want. So being horizontal, inclusive, not sexist, not racist: enormous care with the process. It is a political process and also a creative one, so staging fun, countercultural interventions is the same valuing of process: we want a pleasurable life, freed from institutional constraints and bureaucracy.

My critical assessment is that this characteristic historically made us very inattentive to the results of the struggle. Several such experiences were lost because they were incapable of finding a clear focus. In the anti-globalization movement, it took an enormous effort to convert the struggle into an objective one against the ALCA: let's pressure the Brazilian government not to sign the ALCA. Doing that took effort, because the movement's tendency was to be something self-expressive, a carnival against capitalism. An explosion of anti-systemic rebellion.

What was valued there? The process, the form of struggle. One day, when we win (we don't know how), it will be through the strengthening of the struggle, which will be horizontal, participatory, communal. But that meant the movement had no short-term objective tied to that long-term objective. There was no strategy for winning. By luck, in Seattle it worked and created a paradigm of a winning strategy: we blockade. It was attempted in many places; it never happened again.

But if you look at the experience of the 15M, of Occupy Wall Street and the occupations around the world, including ours like Ocupa Sampa, that inability to find a focus showed up again. It has been happening since the end of the 1960s, if we take the movement's genesis.

And I think the great novelty here is that the MPL created a new paradigm. Having a short-term objective that is part of a process toward a utopia, a deeper transformation. What deeper transformation? The demercantilization of transport. A public right to urban mobility. But it takes concrete form in one step: rolling the fare back. That is totally counterintuitive; as things stand, the fare only ever goes up. From the moment it rolls back, you place on the horizon the possibility of rolling it back all the way to its limit, which is zero.

And that, man, sounds wild; two months ago people would have said it was extremist talk, the stuff of delusional people, and they showed that it is now at the center of the agenda: there isn't a party, a media outlet, or a politician that isn't discussing this agenda, which two months ago was called delusional, out of touch with reality.

I think they managed to create that. And it was feasible, though quite hard. It took sweat, it cost a lot of work, many people were arrested, many people were beaten, people died, but it was feasible; it proved feasible. And that widened the horizons.

But there's more to it than that, isn't there? Because other fare hikes had already been defeated…

There had been, there had been. Actually, I'm talking about the MPL as a whole…

Ah, so you think that was already contained in the first victories?

It was already there, yes. I think the São Paulo MPL simply got much more visibility because we are in São Paulo, but that is what the MPL is: it was born from the lessons of the Revolta do Buzú in Salvador. The Revolta do Buzú was a spontaneous movement of young people (kids, really, teenagers, even pre-teens) who took to the streets and blockaded the city for days on end against the fare hike, and who were betrayed by the UNE.

They had no political instrument. The MPL is the attempt to learn from that mistake, to learn from the spontaneous process. Those who invented, who exemplified this strategy of struggle, were the kids in Salvador; it just had one flaw, since there was no one to negotiate with. And they failed; they lost badly, with the UNE's betrayal. The idea of the MPL, then, is to give that struggle a political footing and to foment the revolts that had been born spontaneously. A political group that will foment a revolt.

And it happened in Florianópolis twice, in 2004 and 2005. Then it happened in several cities; there must have been more than ten transport revolts between 2004 and now. Maybe more than twenty. And the political institutions were deaf to the process. And the kids of the São Paulo MPL kept at it: "one day it will catch here, it has to, it's catching everywhere." They made their bet, kept insisting on the struggle, and got several strategic calls right; they matured strategically, began to think about the short term, about how to apply pressure. There was a maturity in their way of doing politics that I consider a lesson for the autonomist movement not only in Brazil but worldwide.

No joke: Occupy Wall Street had a lot to learn from what the MPL kids did; if they had had their own twenty cents for the financial system, things would have been very different; there could have been a victory. And it is not just a small reform: the free fare is a small reform that points immediately, by its very nature, to a deep transformation of the system. It connects to the demercantilization of transport, and that opens a precedent for various other demercantilizations; it is like a ball of yarn you start pulling, and the horizons keep widening.

They managed to set a short-term goal that was achievable and intrinsically tied to the process of transforming society the way we want. And rather than valuing only the process of struggle, the process is still valued (it is an autonomist movement, with those discussions about the process having to be horizontal, independent of parties, and so on), but it does not neglect the conquest of practical short-term objectives that will speed the passage toward the long-term objective.

Since those early days of the internet you describe, a body of knowledge about organizing movements through networks has also been developing, but perhaps that has only now exploded into common sense, no? Is it only now that this discourse surfaces, that the mobilizations are a product of social networks and the internet?

Well, this conversation is not new. The anti-globalization movement was heavily labeled the internet movement. And I find that a very mistaken techno-determinist reading, because the history is the inverse of how it is told. The story we hear is that the networks are horizontal, democratic, and participatory, and the street movement copies the networks' form of organization. In other words, the networks' form of organization imposes itself on the streets. And it is exactly the opposite. The networks were designed by us to have that shape, not the other way around; that is, the networks acquired their horizontal, participatory format from us.

The internet was a university network until 1995, when it was privatized, that is, opened up for the sale of internet access as a service. And when it was privatized, the model people tried to build was the portal, which is the same model as traditional communication: one sender and many receivers. That was what was being imposed, the model of America Online (AOL), of iG, of UOL… I'll have a big portal, a newsroom with journalists feeding it…

The portals even provided access.

Right: I'll provide access, I'll provide information, an e-mail service… it is a one-to-many model, the traditional model of mass communication. And that was the model being implemented in the 1990s.

The CMI came from the understanding that we should use the possibilities of the internet, which was a bidirectional medium, in which you both spoke and received, and subvert the attempt to turn it into a big television set or a big magazine, creating instead an interactive form of communication based on the experiences of free radio, community TV, fanzines, that whole tradition of alternative media. And that is how it was designed. The CMI was an open-publishing site at a time when blogs didn't even exist. It was the CMI that invented the concept of the blog; there were no blogs, people didn't do that, they made websites. The idea of a blog, something easy to write on and quick to update, did not exist; the CMI is pre-blog, pre-Creative Commons.

And it is no accident that many social media companies came out of the CMI: Twitter, YouTube, Flickr, and Craigslist were all founded by people who came from the CMI. It was a double movement: the CMI serving as proof that communication could be done differently, and people from the CMI, once it exhausted itself, going off to make a living another way. This also has to do with how the American liberal left is organized, which allows these passages from social movement to market in a way we would consider bizarre, but which in the American context is not so bizarre.

This happened mainly in the US and in England, with several CMI techies working at those companies and in a way designing them. Either the companies were influenced directly, in the sense that people left the CMI and designed those technologies, or through the inspiration of the participatory communication model. And today everybody does it, right? Globo News has its "send us your video." Except we invented "send us your video" in 1999.

But how did that happen? Was it an ideological conversion of these people, or more a matter of work?

I think it was more a personal decision to find a job, but you carry your baggage with you. When you develop a project for a company, you think about making it participatory. But it's a hidden history: it has never been told because the actors are ashamed. I know several of them; they're ashamed because they are people who are still activists today. Just as I went to the university, they went to work at companies: if I was doing a master's and had that as my path, the path for a guy who was a programmer was to work at a company where he could program for a living.

But it would be important to give examples, to make clear that we are not copying the networks; it was they who copied us. If you look at today's landscape, in which all electronic communication is participatory, in a certain way that's a victory for our project. And it didn't have to be that way, man. In fact, the tendency in the mid-1990s was for the Internet to become a giant television, with interactivity amounting to changing the channel. It became something else because there was popular participation, and because people tried to take the participatory forms that came from grassroots media and apply them in ways that explored the Internet's potential.

For example, John Downing's book Radical Media tells the history of free radio and community TV, and how it all converges on the CMI at the end of the 1990s. You could write a second volume of his book, showing how, from that point on, a revolution takes place in the information technology companies.

So is that your next book?

No, no. But I've already asked several friends: you need to tell this story, get over the embarrassment, because it's important. It's a hidden history; nobody knows it.

And where does the "third sector" fit into this history – the thing that is neither properly a movement nor the market, that strange creature Fora do Eixo? Is it also a product of this evolution?

Man, I think Fora do Eixo is something else entirely. I don't think Fora do Eixo has anything to do with this history, on either side; I think it's an attempt to put a positive spin on the nature of contemporary work. Fora do Eixo is the most impressive political group I've seen in more than twenty years of activism. I've never seen anyone more efficient than they are; they're an impressive political phenomenon.

They are a hypercapitalist organization. Here's what they did: when the nature of work became informational, you could no longer separate work from non-work. If you work in journalism, you don't switch off; it's not like a factory worker who hangs up his overalls and goes home. You don't switch your brain off: you're thinking about the story, you sit down to chat and you're chatting about the story; it takes you over. That's why it's so hard to organize working hours in this kind of informational work.

Everyone who works behind a computer screen works like this; there's no way to disconnect from work of this symbolic nature. And so you get this mixture of work and non-work, which is crushing. What they did was turn that into something positive and militant.

And they hide the profit.

I don't think they're capitalists in that sense, because they're not after economic profit. It's curious: look at the core of Fora do Eixo, the guys who live in the house – they live like students. They sleep in bunk beds; they live far worse than I do. And they have three million reais in the bank. It's not profit-oriented, which makes it more capitalist, not less.

But if it's strongly oriented toward state power, which implies personal gains, isn't it in a way profit-oriented?

I think their issue is power, not money. I could be wrong; I'm analyzing from the outside. Want to know what I think? Keeping things in proportion – I don't want to overrate them; they're very interesting, but they're also not everything people make them out to be – take Weber and the Protestant ethic. Take the Marxist definition of capitalism: it's the generation of value for the generation of surplus value. Taking money, investing it, and making a profit is entirely pre-capitalist; it goes back to Antiquity.

But back then, what did you do? You took the money and spent it. In Antiquity, one of the conditions of being rich was that you didn't work: you entered an economic enterprise precisely so as not to work; you put other people to work and lived in luxury. And Weber says: capitalism is not that. Capitalism is something else: I take the wealth and reinvest it, generating a process of economic expansion. That is what distinguishes capitalism from other economic processes, and he traces that logic back to Protestant asceticism.

The Protestants accumulated money and reinvested it. And from the moment you adopt that logic – I won't spend it on women, drink, and a life of luxury, the traditional way for rich people to live, but will reinvest it in production – you force every competitor in the market to follow the same logic. The capitalist logic of work, of expansion: work, more work, more rigor, more accumulation of capital.

And whoever doesn't adopt that logic loses to you, and gets bought by you. Capitalism is a logic of expansion, and it keeps extending that logic of work into every sphere of life. That's it, man. So capitalism isn't about accumulating money; capitalism is about not accumulating money. That's why I think the Fora do Eixo project is profoundly dangerous: it's a project of a life lived for work. Without accumulation.

The accumulation is one of power?

It's an accumulation of power; it's a project that uses an economic structure for a hidden power project. What is Fora do Eixo's political project? Unknown. There is no public document. But they have a political project; they're a party – obviously they are a political party. They're not in it for the money: go to their house, the guys eat instant noodles and live like students. And they work like hell, and well. They gave alternative culture a shock of capitalism, of capitalist organization. They arrived in São Carlos and said, "let's organize the São Carlos band circuit." In culture even the capitalist side is super disorganized, and the alternative side of culture is chaotic. Man, it's chaos. Then they show up with a rigorous work ethic, with effective people, and impose it.

And they do it by putting a positive spin on the blurring of work and non-work, creating a culture in which my life is my work. With an activist discourse. As if they were doing activism – but they are not doing activism; they are doing economic activity without a profit motive, generating more accumulation. It's brilliant. And dangerous.

And they apparently come out of the June process strengthened: after the MPL, the group that gained the most visibility was Mídia Ninja.

It's impressive. They have a very sophisticated understanding of the nature of our contemporary capitalism. They knew that, since they lacked the capacity to be a relevant actor themselves, if they controlled the movement's communication they would control the movement – they would control the image of how the movement is represented.

What is Ninja, man? It's something tiny compared to what happened, to the political phenomenon that took place. But they are overexposed, because they control the communication, and communication is key to how people perceive the movement and how the movement perceives itself. So it's strategic. And they do it in a sophisticated way: they work the brand… Their brand work is impressive – what political organization works on a brand? Exposure of the name, exposure of the logo; they build a political text by placing the logo and the name, working with brand leverage, to use the advertising term.

I've said several times that we should use this to gain political maturity, because we will only confront a political actor of this nature by becoming much more serious in our understanding of social struggle; we are very amateurish. They raise the challenge to another level. They are more efficient than the capitalists.

But they engage their members from a perspective of activism.

That form is militant but depoliticized. What is their public platform? None. They organized the Marcha da Liberdade, in defense of freedom, and fought to keep its meaning that generic. They did Existe Amor em SP. It's love, it's freedom; next they'll do something for peace, because it's a strategy of mobilizing without a cause. Curious – it's extremely depoliticizing.

But Existe Amor em SP did have a cause; it was electoral.

It did, but it was hidden. It was against the fascists, in theory. And after Russomano was out of the race, it became even vaguer. For the city… it's a level of total depoliticization; they have no political program. They cannot have a political program – that's why they started with culture, the most depoliticized sector. It's very interesting, because it's an activism of the non-political. Because there is no cause.

It's a great political construction. But now they have reached a limit. Because what happened in June was a huge shake-up: an enormous politicization of Brazilian society. My aunt is talking about politics; everybody is talking about politics; society got politicized. And how will they react to that? To politicize yourself is to take a position, to support this or that, this or that form of organization. Which form of organization do they defend? Are they for the private market, for socialization, for the PT, against the PT? Nothing – you never know.

That's because that discussion is very "rancorous."

Exactly… But you don't know, and that is precisely both the strength and the limit of the political project. I imagine that at some point they will make a leap, because this will hit its limit, and since they are very skillful they will create something new out of what they have built. They are already very close to that limit.

For example, they created Ninja, which is a relative success, but at the same time a failure, because they had no active role in this mobilization.

But the challenge they pose is important for the autonomous movement, because they expose how amateurish we are. That's another subject, but the autonomous movements are very, very attached to their principles.

Another lesson from the MPL: they went and spoke on the Jornal Nacional, spoke on Roda Viva, without embarrassment, and sat down to negotiate on the Conselho da Cidade. Damn, that's a gain. If you look at the history of these movements, doing this with that maturity, with clarity, with strategy – none of it was possible in the anti-globalization movement. And these things were absolutely necessary. So what happened? People did them behind the movement's back.

Because it is necessary: how do I organize a global movement without money? You have to make international purchases, print material, pay for an Internet server. And then how do I manage donations? The movement doesn't want delicate decisions of that kind. So some people just did it. "We received a donation of fifty thousand dollars." Someone talked to someone to get that money; it doesn't fall from the sky – it was done behind the movement's back. And that sabotaged the movement's autonomy. The anti-globalization movement, unlike the MPL, didn't talk to the press. So someone would talk to the press anyway, because there were sectors of the press that supported the movement. Because the movement didn't have the maturity to deal with these things – money, talking to the press, pressuring the government – tasks that are necessary if you are waging a political struggle.

I think the MPL gave a display of political maturity by our earlier standards. They did everything that was necessary, and the result is there: 600 million a year in the pockets of the working class! There's nothing to argue about. And these weren't such terrible things. Giving interviews to everyone under the sun made a huge difference. And there were people in the media who were supportive and who, when things turned and the editor allowed it, did good work. You have to exploit that; it changes a lot. The MPL showed maturity, showed that it takes itself seriously, and that's how it achieved political effect.

We have to incorporate that lesson. I hope the lesson of June isn't "we took to the streets and won," which is part of the truth, but not the most important part. We took to the streets and won with strategy and political maturity, which is very different. That is what Occupy Wall Street didn't do, what the 15M didn't do, what the anti-globalization movement didn't do.

It's also interesting that, when it came to dialogue with the government, they accepted the invitations but didn't negotiate anything.

That's part of their strategy, though they could have been in a situation where negotiating was the strategic move. In their case, however, the demand was very simple: what we want is to revoke the fare increase. "Come here, let's discuss bus corridors, tenders, municipalizing the gasoline tax…" No: revoke the increase. It's a simple message; from the standpoint of communicating with the population, it's crystal clear. And it means breaking a paradigm – the fare goes back down, and that is what they were after; their project is zero fare. And they pulled it off: they won the revocation, and zero fare is on everyone's lips.

And zero fare is a big deal: it's mobility as a social right. And once you win that, you say: wow, what else is a social right? I want more. What else do I have a right to as a member of a collectivity, simply by being a member of that collectivity?

Arctic sea ice delusions strike the Mail on Sunday and Telegraph (Guardian)

Monday 9 September 2013 04.26 BST

Both UK periodicals focus on short-term noise and ignore the rapid long-term Arctic sea ice death spiral


It’s only a matter of time before we experience an Arctic sea ice-free summer. Photograph: Jenny E Ross/Corbis

When it comes to climate science reporting, the Mail on Sunday and Telegraph are only reliable in the sense that you can rely on them to usually get the science wrong. This weekend’s Arctic sea ice articles from David Rose of the Mail and Hayley Dixon at the Telegraph unfortunately fit that pattern.

Both articles claimed that Arctic sea ice extent grew 60 percent in August 2013 as compared to August 2012. While this factoid may be technically true (though the 60 percent figure appears to be an exaggeration), it’s also largely irrelevant. For one thing, the annual Arctic sea ice minimum occurs in September – we’re not there yet. And while this year’s minimum extent will certainly be higher than last year’s, that’s not the least bit surprising. As University of Reading climate scientist Ed Hawkins noted last year,

“Around 80% of the ~100 scientists at the Bjerknes [Arctic climate science] conference thought that there would be MORE Arctic sea-ice in 2013, compared to 2012.”

Regression toward the Mean

The reason so many climate scientists predicted more ice this year than last is quite simple. There’s a principle in statistics known as “regression toward the mean,” which is the phenomenon that if an extreme value of a variable is observed, the next measurement will generally be less extreme. In other words, we should not often expect to observe records in consecutive years. 2012 shattered the previous record low sea ice extent; hence ‘regression towards the mean’ told us that 2013 would likely have a higher minimum extent.
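The principle can be made concrete with a toy simulation. This is a hedged sketch, not real sea-ice data: the `mean_extent` and `noise_sd` values are invented, and the only point is that an extreme-low year drawn from noisy data is almost always followed by a higher one, with no "recovery" mechanism needed.

```python
import random

# Illustrative sketch of "regression toward the mean" with made-up numbers
# (not real sea-ice data): yearly minima drawn independently around a mean.
random.seed(42)
mean_extent = 5.0   # hypothetical long-term mean September minimum
noise_sd = 0.5      # hypothetical weather-driven year-to-year variability

minima = [random.gauss(mean_extent, noise_sd) for _ in range(100_000)]

# Pick the years whose minimum was extremely low (bottom 2%)...
threshold = sorted(minima)[int(0.02 * len(minima))]
extreme_then_next = [(m, n) for m, n in zip(minima, minima[1:]) if m <= threshold]

# ...and see how often the following year bounced back above them.
bounced = sum(n > m for m, n in extreme_then_next)
print(f"{bounced / len(extreme_then_next):.0%} of extreme-low years "
      "were followed by a higher minimum")
```

Because the extreme-low years already sit far below the long-term mean, an independent draw for the following year almost always lands above them; that is the whole content of the scientists' prediction for 2013.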

The amount of Arctic sea ice left at the end of the annual melt season is mainly determined by two factors – natural variability (weather patterns and ocean cycles), and human-caused global warming. The Arctic has lost 75 percent of its summer sea ice volume over the past three decades primarily due to human-caused global warming, but in any given year the weather can act to either preserve more or melt more sea ice. Last year the weather helped melt more ice, while this year the weather helped preserve more ice.

Last year I created an animated graphic called the ‘Arctic Escalator’ that predicted the behavior we’re now seeing from the Mail on Sunday and Telegraph. Every year when the weather acts to preserve more ice than the previous year, we can rely on climate contrarians to claim that Arctic sea ice is “rebounding” or “recovering” and there’s nothing to worry about. Given the likelihood that 2013 would not break the 2012 record, I anticipated that climate contrarians would claim this year as yet another “recovery” year, exactly as the Mail on Sunday and Telegraph have done.

Arctic sea ice extent data, 1980–2012. Data from NSIDC.

In short, this year’s higher sea ice extent is merely due to the fact that last year’s minimum extent was record-shattering, and the weather was not as optimal for sea ice loss this summer. However, the long-term trend is one of rapid Arctic sea ice decline, and research has shown this is mostly due to human-caused global warming.

When Will the Arctic be Ice-Free?

Both Rose and Dixon referenced a 2007 BBC article quoting Professor Wieslaw Maslowski saying that the Arctic could be ice-free in the summer of 2013. In a 2011 BBC article, he predicted ice-free Arctic seas by 2016 “plus or minus three years.” Other climate scientists believe this prediction is too pessimistic, and expect the first ice-free Arctic summers by 2040.

It’s certainly difficult to predict exactly when an ice-free Arctic summer will occur. While climate research has shown that the Arctic sea ice decline is mostly human-caused, there may also be a natural component involved. The remaining sea ice may abruptly vanish, or it may hold on for a few decades longer. What we do know is that given its rapid decline, an ice-free Arctic appears to be not a question of if, but when.

Continuing Global Warming

Both articles also claimed that “some scientists” are predicting that we’re headed into a period of global cooling. Both named just one scientist making this claim – Professor Tsonis of the University of Wisconsin, whose research shows that slowed global surface warming is only temporary. In fact, Tsonis’ co-author Kyle Swanson wrote,

“What do our results have to do with Global Warming, i.e., the century-scale response to greenhouse gas emissions? VERY LITTLE, contrary to claims that others have made on our behalf.”

Both articles also wrongly claimed that global warming has “paused” since 1997. In reality, global surface temperatures have warmed over the past 15 years, albeit more slowly than during the previous 15 years. It is possible to cherry pick a shorter time frame over which global surface temperatures haven’t warmed, as I illustrated in my other animated ‘Escalator’ graphic.
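The cherry-picking point can be sketched numerically. The data below are synthetic; the `trend_per_year` and noise values are assumptions invented for the illustration, not measurements. A series with a steady underlying trend still produces short windows whose fitted slopes look flat or even negative:

```python
import random

# Synthetic series: a steady underlying trend plus year-to-year noise.
random.seed(1)
trend_per_year = 0.02          # hypothetical warming, degrees C per year
series = [trend_per_year * yr + random.gauss(0, 0.15) for yr in range(40)]

def slope(ys):
    """Ordinary least-squares slope of ys against 0..len(ys)-1."""
    n = len(ys)
    xbar = (n - 1) / 2
    ybar = sum(ys) / n
    num = sum((x - xbar) * (y - ybar) for x, y in enumerate(ys))
    den = sum((x - xbar) ** 2 for x in range(n))
    return num / den

print(f"full 40-year trend: {slope(series):+.3f} C/yr")
# Short windows fluctuate widely; some can look flat or negative.
for start in range(0, 40, 8):
    window = series[start:start + 8]
    print(f"years {start}-{start + 7}: {slope(window):+.3f} C/yr")
```

The full-series fit recovers the underlying trend well, while the eight-year windows scatter widely around it; that is why a short-term "pause" carries little information about the long-term signal.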

Average of NASA GISS, NOAA NCDC, and HadCRUT4 monthly global surface temperature anomalies from January 1970 through November 2012 (green), with linear trends applied to the timeframes Jan ’70 – Oct ’77, Apr ’77 – Dec ’86, Sep ’87 – Nov ’96, Jun ’97 – Dec ’02, and Nov ’02 – Nov ’12.

However, the opposite is true of the overall warming of the planet – Earth has accumulated more heat over the past 15 years than during the prior 15 years.

Global heat accumulation data (ocean heating in blue; land, atmosphere, and ice heating in red) from Nuccitelli et al. (2012).

Recent research strongly suggests that the main difference between these two periods comes down to ocean heat absorption. Over the past decade, heat has been transferred more efficiently to the deep oceans, offsetting much of the human-caused warming at the surface. During the previous few decades, the opposite was true, with heat being transferred less efficiently into the oceans, causing more rapid warming at the surface. This is due to ocean cycles, but cycles are cyclical – meaning it’s only a matter of time before another warm cycle occurs, causing accelerating surface warming (as Tsonis’ research shows).

It would be foolhardy for anyone to predict future global cooling, and those few who are so foolish are unwilling to put their money where their mouths are, as my colleague John Abraham discovered when he challenged one of them to a bet, only to find the other party unwilling to stand behind it.

Rose and Dixon Invent an IPCC ‘Crisis Meeting’

Both articles also claimed the Intergovernmental Panel on Climate Change (IPCC), whose Fifth Assessment Report is due out in a few weeks, has been forced “to hold a crisis meeting.” This claim made it into both articles even though Ed Hawkins noted,

“I told David Rose on the phone and by email on Thursday about the IPCC process and lack of ‘crisis’ meeting.”

Unfortunately that didn’t stop Rose from inventing this meeting, or Dixon from repeating Rose’s fictional reporting in the Telegraph.

Yes, Humans are Driving Global Warming

Finally, both articles quoted climate scientist Judith Curry claiming that the anticipated IPCC statement of 95 percent confidence that humans are the main cause of the current global warming is unjustified. However, Curry has no expertise in global warming attribution, and has a reputation for exaggerating climate uncertainties. In reality, the confident IPCC statement is based on recent global warming attribution research. More on this once the IPCC report is actually published – any current commentaries on the draft report are premature.

Shoddy Climate Reporting

These two articles at the Mail on Sunday and Telegraph continue the unfortunate trend of shoddy climate reporting in the two periodicals, particularly from David Rose. They suffer from cherry picking short-term data while ignoring the long-term human-caused trends, misrepresenting climate research, repeating long-debunked myths, and inventing IPCC meetings despite being told by climate scientists that these claims are pure fiction.

Based on their history of shoddy reporting, the safest course of action when reading a climate article in the Mail on Sunday or Telegraph is to assume they’re misrepresentations or falsehoods until you can verify the facts therein for yourself.

*   *   *

Global warming? No, actually we’re cooling, claim scientists (Telegraph)

A cold Arctic summer has led to a record increase in the ice cap, leading experts to predict a period of global cooling.


Major climate research centres now accept that there has been a “pause” in global warming since 1997.  Photo: ALAMY

9:55AM BST 08 Sep 2013

There has been a 60 per cent increase in the amount of ocean covered with ice compared to this time last year, the equivalent of almost a million square miles.

In a rebound from 2012’s record low, an unbroken ice sheet more than half the size of Europe already stretches from the Canadian islands to Russia’s northern shores, days before the annual re-freeze is even set to begin.

The Northwest Passage from the Atlantic to the Pacific has remained blocked by pack-ice all year, forcing some ships to change their routes.

A leaked report to the UN Intergovernmental Panel on Climate Change (IPCC) seen by the Mail on Sunday, has led some scientists to claim that the world is heading for a period of cooling that will not end until the middle of this century.

If correct, it would contradict computer forecasts of imminent catastrophic warming. The news comes several years after the BBC predicted that the Arctic would be ice-free by 2013.

The original predictions led to billions being invested in green measures to combat the effects of climate change.

The changing predictions have led to the UN’s climate change body holding a crisis meeting, and the IPCC is due to report on the situation in October. A pre-summit meeting will be held later this month.

But the leaked documents are said to show that the governments who fund the IPCC are demanding 1,500 changes to the Fifth Assessment Report – a three-volume study issued every six or seven years – as they claim its current draft does not properly explain the pause.

The extent to which temperatures will rise with carbon dioxide levels and how much of the warming over the past 150 years, a total of 0.8C, is down to human greenhouse gas emissions are key issues in the debate.

The IPCC says it is “95 per cent confident” that global warming has been caused by humans – up from 90 per cent in 2007 – according to the draft report.

However, US climate expert Professor Judith Curry has questioned how this can be true, arguing that rather than increasing in confidence, “uncertainty is getting bigger” within the academic community.

Long-term cycles in ocean temperature, she said, suggest the world may be approaching a period similar to that from 1965 to 1975, when there was a clear cooling trend.

At the time some scientists forecast an imminent ice age.

Professor Anastasios Tsonis, of the University of Wisconsin, said: “We are already in a cooling trend, which I think will continue for the next 15 years at least. There is no doubt the warming of the 1980s and 1990s has stopped.”

The IPCC is said to maintain that their climate change models suggest a pause of 15 years can be expected. Other experts agree that natural cycles cannot explain all of the recorded warming.


Anthropology and the Anthropocene (Anthropology News)

Anthropology and Environment Society

September 2013

Amelia Moore

“The Anthropocene” is a label that is gaining popularity in the natural sciences.  It refers to the pervasive influence of human activities on planetary systems and biogeochemical processes.   Devised by Earth scientists, the term is poised to formally end the Holocene Epoch as the geological categorization for Earth’s recent past, present, and indefinite future.  The term is also poised to become the informal slogan of a revitalized environmental movement that has been plagued by popular indifference in recent years.

Climate change is the most well known manifestation of anthropogenic global change, but it is only one example of an Anthropocene event.  Other examples listed by the Earth sciences include biodiversity loss, changes in planetary nutrient cycling, deforestation, the hole in the ozone layer, fisheries decline, and the spread of invasive species.  This change is said to stem from the growth of the human population and the spread of resource intensive economies since the Industrial Revolution (though the initial boundary marker is in dispute with some scientists arguing for the Post-WWII era and others for the advent of agriculture as the critical tipping point).  Whatever the boundary, the Anthropocene signifies multiple anthropological opportunities.

What stance should we, as anthropologists, take towards the Anthropocene? I argue that there are two (and likely more) equally valid approaches to the Anthropocene: anthropology in the Anthropocene and anthropology of the Anthropocene.  Anthropology in the Anthropocene already exists in the form of climate ethnography and work that documents the lived experience of global environmental change.  Arguably, ethnographies of protected areas and transnational conservation strategies exemplify this field as well.  Anthropology in the Anthropocene is characterized by an active concern for the detrimental effects of anthropogenesis on populations and communities that have been marginalized to bear the brunt of global change impacts or who have been haphazardly caught up in global change solution strategies.  This work is engaged with environmental justice and oriented towards political action.

Anthropology of the Anthropocene is much smaller and less well known than anthropology in the Anthropocene, but it will be no less crucial.  Existing work in this vein includes those who take a critical stance towards climate science and politics as social processes with social consequences.  Beyond deconstruction, these critical scholars investigate what forms scientific and political assemblages create and how they participate in remaking the world anew.  Other existing research in this mode interrogates the idea of biodiversity and the historical and cultural context for the notion of anthropogenesis itself.  In the near future, we will see more work that can enquire into both the sociocultural and socioecological implications and manifestations of Anthropocene discourse, practice and logic.

I have only created cursory sketches of anthropology in the Anthropocene and anthropology of the Anthropocene here.  However, these modes are not at all mutually exclusive, and they should inspire many possibilities for future work.  The centrality of anthropos, the idea of the human, within the logics of the Anthropocene is an invitation for anthropology to renew its engagements with the natural sciences in research collaborations and as the object of research, especially the ecological and Earth sciences.

For starters, we should consider the implications of the Anthropocene idea for our understandings of history and collectivity.  If the natural world is finally gaining recognition within the authoritative sciences as intimately interconnected with human life such that these two worlds cease to be separate arenas of thought and action or take on different salience, then both the Humanities and the natural sciences need to devise more appropriate modes of analysis that can speak to emergent socioecologies.  This has begun in anthropology with some recent works of environmental health studies, political ecology, and multispecies ethnography, but is still in its infancy.

In terms of opportunities for legal and political engagement, the Anthropocene signifies possibilities for reconceptualizing environmentalism, conservation and development.  Anthropologists should be cognizant of new design paradigms and models for organizing socioecological collectives from the urban to the small island to the riparian.  We should also be on the lookout for new political collaborations and publics creating conversations utilizing multiple avenues for communication in the academic realm and beyond.  Emergent asymmetries in local and transnational markets and the formation of new multi-sited assemblages of governance should be of special importance.

In terms of science, the Anthropocene signals new horizons for studying and participating in global change science.  The rise of interdisciplinary socioecology, the biosciences of coupled natural and human complexity, geoengineering and the biotech interest in de-extinction are just a sampling of important transformations in research practices, research objects, and the shifting boundaries between the lab and the field.  Ongoing scientific reorientation will continue to yield new arguments about emergent forms of life that will participate in the creation of future assemblages, publics, and movements.

I would also like to caution against potentially unhelpful uses of the Anthropocene idea.  The term should not become a brand signifying a specific style of anthropological research.  It should not gloss over rigid solidifications of time, space, the human, or life.  We should not celebrate creativity in the Anthropocene while ignoring instances of stark social differentiation and capital accumulation, just as we should not focus on Anthropocene assemblages as only hegemonic in the oppressive sense.   Further, we should be cautious with our utilization of the crisis rhetoric surrounding events in the Anthropocene, recognizing that crisis for some can be turned into multiple forms of opportunity for others.  Finally, we must admit the possibility that the Anthropocene may not succeed in gaining lasting traction through formal designation or popularization, and we should not overstate its significance by assuming its universal acceptance.

In the next year, the Section News Column of the Anthropology and Environment Society will explore news, events, projects, and arguments from colleagues and students experimenting with various framings of the Anthropocene in addition to its regular content.  If you would like to contribute to this column, please contact Amelia Moore at a.moore4@miami.edu.

– See more at: http://www.anthropology-news.org/index.php/2013/09/09/anthropology-and-the-anthropocene/

Climate Change’s Silver Bullet? Our Interview With One Of The World’s Top Geoengineering Scholars (Climate Progress)

BY ARI PHILLIPS ON SEPTEMBER 6, 2013 AT 1:10 PM

MELBOURNE, Australia — Since coming to Australia almost two months ago I’ve heard about Clive Hamilton in the process of reporting just about every story I’ve done. Then I picked up his new book Earthmasters: The Dawn of the Age of Climate Engineering, and now I see what all the fuss is about.

In all of the debates over how to address climate change, climate engineering — or geoengineering — is among the most contentious. It involves large-scale manipulation of the Earth’s climate using grand technological interventions, such as fertilizing the oceans with iron to absorb carbon dioxide or releasing sulfur into the atmosphere to reduce radiation. While its proponents call geoengineering a silver bullet for our climate woes, its skeptics are far more critical. Joe Romm, for one, likens geoengineering to a dangerous course of chemotherapy and radiation to treat a condition curable through diet and exercise — or, in this case, emissions reduction.

According to the cover of Hamilton’s new book, “The potential risks are enormous. It is messing with nature on a scale we’ve never seen before, and it’s attracting a flood of interest from scientists, venture capitalists and oil companies.”

Hamilton is an Australian author and public intellectual. Until 2008 he was the Executive Director of The Australia Institute, a progressive think tank that he founded in 1993. He is now Professor of Public Ethics at the Centre for Applied Philosophy and Public Ethics, a joint centre of Charles Sturt University and the University of Melbourne.

His books include Requiem for a Species: Why We Resist the Truth About Climate Change, Scorcher: The Dirty Truth About Climate Change and Growth Fetish, amongst others.

Hamilton’s next book will be about the anthropocene — a new geologic era in which human activities have had a significant impact on the Earth’s ecosystems. He took some time to talk with me about this new era, the future of geoengineering and what it all means for humanity. This interview has been edited for clarity and length.

How has the environmental community responded to your book on geoengineering?

I remember back in the late 1990s, around Kyoto, there was a great deal of resistance amongst environmentalists and climate activists, including myself, against any talk of adaptation. It was seen as a capitulation, a kind of defeatism: we ought not to talk about adaptation because that would mean mitigation had failed. Eventually I think we all came around to the view that some climate change is going to happen and therefore adaptation has to be considered. It’s better to have a seat at the table, as it were, when adaptation is being discussed.

I think we’re in the same stage now with geoengineering. Most environmentalists don’t want to know about it. Most climate activists don’t want to talk about it. There is a sense that in doing so you are conceding that it could well be possible that geoengineering will be necessary because the world community will continue to fail, perhaps even more egregiously, at responding to scientific warnings.

But I wrote the book because I became aware, in writing my previous book, that the genie was out of the bottle: geoengineering was going to grow in importance. Therefore, climate campaigners and environmental groups are sooner or later going to have to engage with the issue. It’s a question of whether they start now or leave it for another five years, at which point the lobby backing geoengineering will be much more powerful and will have had an opportunity to frame the issue more rigidly in the media and in the broader public mind.

Did you come across any big surprises while writing the book?

There were a couple of big surprises. One was the extent of the geoengineering lobby and the links between the scientists and the investors. I developed a much stronger sense of the likelihood of a powerful geoengineering constituency emerging, which would — if it were not countered by a skeptical community of thinkers and campaigners — essentially take control of the whole agenda. Plotting those links and laying them out was something I go into in quite a lot of detail. At the same time it stimulated me to think about the military-industrial complex, the famous lobby group that held such sway in the U.S. in the middle of the 20th century.

One thing I noticed while doing this research and looking at the scientists involved was the density of the linkages with the Lawrence Livermore National Laboratory. So I investigated further, and it’s really quite astonishing the extent to which many, if not most, prominent geoengineering researchers in the U.S. worked at Livermore, or have close links with people there now or with people who used to work there.

Then when I read Hugh Gusterson’s book on Livermore and its role in the Cold War and nuclear weapons development, I started to think much more carefully about the type of mindset that is especially drawn to geoengineering as a technological response to global warming. I think it’s quite alarming in its implications. That led me to think further about the geostrategic implications of climate engineering, something that has received almost no attention, though we do know that people in the military and related strategic communities are starting to think about geoengineering and what it would mean for international relations and conflict.

What about the potential for financial gain?

A noble desire to save the world from climate change will attract less noble intentions. That’s just the way of the world. Never let a good crisis go to waste. Already we’re seeing it with Canadian oil sands billionaire Murray Edwards investing in geoengineering technologies.

I spend quite a bit of time talking about Bill Gates in the book. Earlier this year, I was talking about the scientific entrepreneurial lobby group that was emerging during a debate with Peter Singer (another Australian philosopher) and I mentioned Bill Gates. Singer said, “Well what’s wrong with Bill Gates? He’s well motivated. He does a lot of good charity work. If you’re going to have millionaires investing in geoengineering then Bill Gates would be one of the first.”

I made the point that yes, Bill Gates is now in philanthropic mode, and The Bill and Melinda Gates Foundation does praiseworthy work, but it’s not Bill Gates’ motives I’m worried about; it’s his worldview. That Silicon Valley, ‘we’ve-got-an-app-for-that’ kind of understanding. Joe Romm (editor of Climate Progress) has been very critical of Gates for his dissing of renewable energy technology. Gates described solar energy as “cute.”

So Gates is drawn to big, new, shiny technological responses that clever people dream up. You know, brainstorming over pizza and coke. That’s where he comes from and that’s how he thinks. It’s one thing to think about computers and software that way, but it’s a completely different matter to think about the earth as a whole as in need of a snazzy new app that will solve the problem. I think that’s an extremely dangerous way to understand it for all sorts of reasons.

Perhaps the most important of which is that the climate change problem is not a technological one. It’s a social and political one. The more people focus on techno-fixes, the more they distract us from the real problems, which are the social and political difficulties of responding to climate change. We have the technology and have had it for many years. So arguing that the blockage is the absence of technology is extremely unhelpful and plays into the hands of the fossil fuel lobby.

In the book you say the slippery slope to a techno-fix promises a substitute for the slippery slope to revolution.

Revolutions take all different forms of course, from the industrial revolution to Tahrir Square to the cultural revolution of the 60s and 70s, and it’s more along the lines of the latter that I was thinking of, and into which new forms of climate activism can feed. There’s often a sort of terror in environmental groups that if they do something radical or outrageous, they’ll alienate mom and dad in the suburbs.

But social movements that have radically changed the way our world works — think of the women’s movement — have frequently started out being rancorous and difficult and attracting the derision of the conservative press and politicians. And indeed, mystifying and often alienating people living in the suburbs or high-rises. If people have to be shocked and outraged before they come around to seeing that some fundamental transformation is necessary, then so be it. I think there’s a level of fear and complacency and unwillingness to change on the part of our societies, and some kind of circuit breaker is necessary.

You talk about the acceptance of the “solution” of geoengineering even by people who don’t seem to think climate change is a problem in the first place.

That’s one of the, on the face of it, mystifying aspects of the geoengineering debate: why conservative think tanks like the American Enterprise Institute, the Cato Institute and even the Heartland Institute, which have for years worked hard to deny climate science and block all measures to reduce carbon emissions, have come out in favor of geoengineering.

What it shows us is that the debate over climate change and the role of the deniers is not about the science. They want to make it about the science because that gives it an air of legitimacy, but it’s really about fundamental cultural and political values. So if geoengineering is the solution then they’re happy to concede that there’s a problem because geoengineering is a big, technological, macho, system-justifying response to climate change. And that’s the kind of response that fits with their political orientation.

How does the “American Way of Life” factor into all of this?

Already we’re seeing what authorities in the U.S. need to do to protect people from massive hurricanes, monster wildfires and frequent floods. As the effects of climate change become even more severe, we’ll see nations like the U.S. that are in a position to adapt start spending billions of dollars doing so.

And of course, the more the climate deniers persuade politicians that hurricanes and wildfires aren’t due to climate change, the less responsive authorities are likely to be and the more people will die, in effect. Eventually it will be impossible to keep pretending that mitigation is not the first-best option. It might take five years, it might take ten; let’s hope it doesn’t take twenty.

The tragedy of all this is that if the world had gotten serious ten or so years ago, the cost would have been vastly smaller than it’s going to be. But that assumes a perfect world in which human beings are rational and take reasonable measures to protect themselves when scientists warn them. We now know that the Enlightenment conception of human beings as rational creatures who assess the evidence and take measures to protect themselves from harm has collapsed before us. We can no longer maintain that belief.

That feeds into this idea of the anthropocene and the ethical implications of living in a world with climate change.

I am writing a book about the anthropocene because it seems to me that when human beings become so powerful that they transform the fundamental cycles and processes that govern the evolution of the Earth itself, we’re entering an era, or we’ve reached an event, that’s as significant as the industrial revolution, or even the process of civilization itself.

It causes us to rethink pretty much everything. It certainly causes us to rethink the relationship of human beings to the planet and, in a harder way, what a human being is. The modern conception of a human being is an isolated ego existing inside a body. And most of us think that’s just what we are. But in fact that’s a very recent and culturally specific understanding of what a human being is. And it’s the conception of a human being that’s consistent with an advanced consumer society.

Collectively though, we are the kind of creatures, like certain types of microbes, that can completely transform the nature of the planet on which we live. If this is so, then it causes us to rethink who we are and what the place of this strange, clever creature is on planet earth.

We can no longer think of the Earth as the passive and unresponsive backdrop to the human drama where we play out our parts in a kind of Shakespearean play and not worry about the backdrop. We now find that the backdrop, the stage scenery, has entered into the play and is disrupting the whole proceedings.

Something very profound has happened. Human history, which we think of as only being a few thousand years old and is the history of human actions, has converged with geologic history, which we always thought of as operating in a very distinct domain having nothing to do with us. But now we find that our history affects the history of the earth.

If there is no more human history distinct from earth history, then what does that mean?

A Princess’s Life in Sustainability Reports (Envolverde)

September 2, 2013, 11:01 am

By Carla Stoicov and Wilson Bispo, of Tistu

Companies still don’t understand why their sustainability reports are not read by their stakeholders. Produced in the traditional top-down model of communication, they have not yet realized that the report’s themes should be chosen not by them but by their publics, in a process of dialogue carried out throughout the year. But this may be about to change, and you should be part of it.

Last month we took part in a day where the new features of the G4 Sustainability Reporting Guidelines were presented and discussed. The training (though we find the subject too raw to call it a course) included companies, consultants and representatives of industry entities and federations.

The two most striking aspects of the new version, which takes effect in January 2016 (1), are the requirement to have a materiality matrix and to demonstrate that the company understands its impacts both inside and outside the organization, putting the supply chain on the agenda. The Materialidade Brasil survey, conducted by the consultancy Report Sustentabilidade, found that 85% of companies published their material topics, but only 61% published their materiality matrix. An even smaller share (45%) published targets tied to their material topics (that is, although the topics were deemed material, 55% of companies felt it was not yet time to set targets for them).

Some gaps in the G4 remain unforgivable. There was no progress in proposing a methodology for conducting a materiality process, nor in how to balance the issues raised by internal versus external stakeholders. Takao Consultoria produced a Manual for Implementing Stakeholder Engagement. The document proposes prioritization matrices and stakeholder profiles, and also illustrates a matrix for prioritizing topics against internal and external criteria. It can be a good model to follow, but when we read a sustainability report it is often not explicit how the publics to be consulted were selected and prioritized, whether they all participated together or not, whether the topics were given in advance or opened up to themes that emerged in the process, and how the prioritization of topics was reached.
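The kind of topic-prioritization matrix described above can be sketched in a few lines. This is an illustrative toy only, not Takao Consultoria’s actual methodology: the topic names, the 1-to-5 relevance scales and the cutoff threshold are all invented for the example.

```python
# Toy materiality matrix: each topic is scored (hypothetically, 1-5) for
# its relevance to internal and external stakeholders; topics are ranked
# by combined score and kept only if they clear a materiality threshold.
topics = {
    # topic: (internal relevance, external relevance) -- invented values
    "supply chain labor practices": (4, 5),
    "water use": (3, 4),
    "corruption and compliance": (5, 5),
    "stock-photo quality": (1, 1),
}

def prioritize(topics, threshold=7):
    """Rank topics by combined internal+external score, descending,
    keeping only those at or above the threshold."""
    ranked = sorted(topics.items(), key=lambda kv: sum(kv[1]), reverse=True)
    return [(name, sum(scores)) for name, scores in ranked
            if sum(scores) >= threshold]

for name, score in prioritize(topics):
    print(f"{score:2d}  {name}")
```

The point of the sketch is the gap the article identifies: nothing in such a matrix records *who* assigned the scores or how the publics were chosen, which is exactly what reports tend to leave unexplained.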

Another point is that materiality must involve stakeholders, but the guidelines do not require the participation of any particular stakeholder group. A company can stick to its employees alone, or even leave them out.

Materialidade Brasil survey. Photo: Report Sustentabilidade

Despite talking a great deal about the supply chain, the new version is still far from proposing a more systemic, broad view covering the organization’s entire value chain (one that also includes distribution, clients and consumers). In other words, the company has to pay attention to whom it buys from, where those suppliers are and where the impacts in the supply chain are located. Yet it remains free to sell to whomever it wants! The concept of sphere of influence from the ISO 26000 standard was applied to only part of the chain.

“Sphere of influence: range/extent of political, contractual, economic or other relationships through which an organization has the ability to affect the decisions or activities of individuals or organizations.” – ISO 26000

This leaves out the company’s concerns about selling its products or services to, for example, countries with serious human-rights violations or in civil war; organizations involved in corruption or money laundering; companies that deforest, or that have child labor or slave-like labor in their operations or supply chains; and so on.

The organizations present at the training we attended also asked whether the GRI has any guidance on how to make reports more attractive and more widely read. Well, we don’t see that as the GRI’s mission; it is, first of all, the companies’ own. A report will be interesting because of the quality and relevance of the information it contains. That brings to mind why someone “likes” a company’s official Facebook page: out of genuine interest, not to enter a commercial promotion. We follow several organizations, and what they offer us is quality information, regardless of their product or service.

The blog Testando os Limites da Sustentabilidade (a kind of watchdog) reads and analyzes companies’ sustainability reports and then sends questions about incomplete or imprecise information, pointing out gaps in topics that should be covered given the company’s business. Many companies respond to the blog, and they should welcome the questions: after all, someone is reading their report!

Still, there is a large gap left by stakeholders who are indispensable to improving the quality of information and the transparency of the private sector in Brazil, such as civil-society organizations, the press, academia, consultancies, think tanks, collectives and opinion leaders in general. Skeptical of the content published in the reports (to some extent, rightly so), these publics fail to perform an enormous service to society when they do not read and critically analyze sustainability reports, one of the few, if not the only, sources for consulting how corporations claim to be conducting their business here.

We believe there is ample room to try different ways of reading this kind of document. These stakeholders have the technical capacity and complementary information to “digest” the content of the reports and cross-check it against the company’s actual performance, against what the GRI requires, against the practices of other companies in the same sector, against public policies, against voluntary pacts and commitments, and against the information and practices of the parent company in search of a double standard (2), among dozens of other possible angles.

Analyzing the private sector’s public information, or exposing its absence, helps generate critical knowledge that can drive new practices by companies. That is the case of the study Sustentabilidade do Setor Automotivo, produced by Tistu for UniEthos, which is being used by an automaker in its sustainability strategy.

That is why what really matters in the reports is the data relevant to the reader, not to the writer. Yet the information still comes wrapped in an unnecessary package of catchphrases that say little or nothing, add nothing to the analysis, and repeat the same stories year after year without making clear what actually changed from the previous year. Hence the importance of critical analysis by opinion leaders. Until companies improve on this front, initiatives that capture the data and “clean” the information of its excesses will be useful for broadening knowledge about which sustainability issues companies are putting in their basket of priorities.

At a time of demonstrations whose placards raise banners such as health (to be improved), corruption (to be fought), transportation (as a form of inclusion) and access to the city (as a way of promoting equality), who wouldn’t want to know which companies are attuned to these needs and work in line with public policies? It is inconceivable to us to open Siemens’s report next year and not see the issue of corruption and cartels. Or to read Samsung’s report and find nothing about degrading working conditions. But we are not talking about those texts on how much they value their processes, how their systems work, and so on. We want to know exactly the opposite. What lessons were learned, where was the flaw, which challenges do they consider far from overcome? Companies are made of people and are therefore full of failures, inconsistencies and dilemmas. That is what is missing from their princess-life narratives.

Perhaps this should be the trend for corporate reporting: creating views by topic of interest from the perspective of those seeking the information, with dynamic construction and updating (meaning they would not be produced unilaterally, by the companies alone), with far fewer filters and stock photos.

More Independent Narratives, Journalism and Action (NINJA) in sustainability reports has the potential to make them much more interesting and alive. If companies don’t do it, someone else will.

(1) That is, the last year reported under the G3 or G3.1 version will be 2014, with 2015 already reported under the new version.

(2) This is a methodology Tistu has been adopting in some studies: isolating certain issues and looking for practices and positions at the parent company and in the Brazilian operation. When there is a divergence (i.e., the parent has policies or programs, or treats the issue as strategic, and here it is not even mentioned), we have what we call a Double Standard.

Carla Stoicov is a master’s student in Management and Public Policy at FGV-SP. A partner at Tistu, she works as a consultant on Sustainable Development and Corporate Social Responsibility projects. She coordinated the Instituto Ethos Tear Program and is a specialist at UniEthos.

Wilson Bispo is a journalist who has covered socio-environmental and CSR issues since 2005. A partner at Tistu, he was a producer for Repórter Eco on TV Cultura in São Paulo, an editor at the Envolverde portal and agency, and a consultant at Report Sustentabilidade.

** Originally published on the Tistu website.

Language and Tool-Making Skills Evolved at the Same Time (Science Daily)

Sep. 3, 2013 — Research by the University of Liverpool has found that the same brain activity is used for language production and making complex tools, supporting the theory that they evolved at the same time.

Three hand axes produced by participants in the experiment. Front, back and side views are shown. (Credit: Image courtesy of University of Liverpool)

Researchers from the University tested the brain activity of 10 expert stone tool makers (flint knappers) as they undertook a stone tool-making task and a standard language test.

Brain blood flow activity measured

They measured the brain blood flow activity of the participants as they performed both tasks using functional Transcranial Doppler Ultrasound (fTCD), commonly used in clinical settings to test patients’ language functions after brain damage or before surgery.

The researchers found that brain patterns for both tasks correlated, suggesting that they both use the same area of the brain. Language and stone tool-making are considered to be unique features of humankind that evolved over millions of years.

Darwin was the first to suggest that tool-use and language may have co-evolved, because they both depend on complex planning and the coordination of actions, but until now there has been little evidence to support this.

Dr Georg Meyer, from the University Department of Experimental Psychology, said: “This is the first study of the brain to compare complex stone tool-making directly with language.

Tool use and language co-evolved

“Our study found correlated blood-flow patterns in the first 10 seconds of undertaking both tasks. This suggests that both tasks depend on common brain areas and is consistent with theories that tool-use and language co-evolved and share common processing networks in the brain.”
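As a loose illustration of what “correlated blood-flow patterns” means, here is a toy Pearson correlation between two time series. This is not the study’s actual fTCD analysis pipeline; the function is generic and the sample numbers are invented.

```python
# Illustrative only: Pearson correlation between two hypothetical
# blood-flow (lateralization) time series, one per task. A value near 1
# would suggest the two tasks engage similar activity patterns.
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented sample signals over the first seconds of each task.
tool_task = [0.1, 0.4, 0.8, 0.9, 0.7, 0.5]
language_task = [0.2, 0.5, 0.7, 1.0, 0.6, 0.4]

print(round(pearson(tool_task, language_task), 2))
```

With these made-up signals the coefficient comes out high, which is the qualitative shape of the study’s finding; the real analysis of course involves many participants and statistical controls.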

Dr Natalie Uomini from the University’s Department of Archaeology, Classics & Egyptology, said: “Nobody has been able to measure brain activity in real time while making a stone tool. This is a first for both archaeology and psychology.”

The research was supported by the Leverhulme Trust, the Economic and Social Research Council and the British Academy. It is published in PLOS ONE.

Journal Reference:

  1. Natalie Thaïs Uomini, Georg Friedrich Meyer. Shared Brain Lateralization Patterns in Language and Acheulean Stone Tool Production: A Functional Transcranial Doppler Ultrasound Study. PLoS ONE, 2013; 8 (8): e72693. DOI: 10.1371/journal.pone.0072693

Primate Calls, Like Human Speech, Can Help Infants Form Categories (Science Daily)

Sep. 2, 2013 — Human infants’ responses to the vocalizations of non-human primates shed light on the developmental origin of a crucial link between human language and core cognitive capacities, a new study reports.

Mantled howler (Alouatta seniculus) howling. (Credit: © michaklootwijk / Fotolia)

Previous studies have shown that even in infants too young to speak, listening to human speech supports core cognitive processes, including the formation of object categories.

Alissa Ferry, lead author and currently a postdoctoral fellow in the Language, Cognition and Development Lab at the Scuola Internationale Superiore di Studi Avanzati in Trieste, Italy, together with Northwestern University colleagues, documented that this link is initially broad enough to include the vocalizations of non-human primates.

“We found that for 3- and 4-month-old infants, non-human primate vocalizations promoted object categorization, mirroring exactly the effects of human speech, but that by six months, non-human primate vocalizations no longer had this effect — the link to cognition had been tuned specifically to human language,” Ferry said.

In humans, language is the primary conduit for conveying our thoughts. The new findings document that for young infants, listening to the vocalizations of humans and non-human primates supports the fundamental cognitive process of categorization. From this broad beginning, the infant mind identifies which signals are part of their language and begins to systematically link these signals to meaning.

Furthermore, the researchers found that infants’ response to non-human primate vocalizations at three and four months was not just due to the sounds’ acoustic complexity, as infants who heard backward human speech segments failed to form object categories at any age.

Susan Hespos, co-author and associate professor of psychology at Northwestern, said, “For me, the most stunning aspect of these findings is that an unfamiliar sound like a lemur call confers precisely the same effect as human language for 3- and 4-month-old infants. More broadly, this finding implies that the origins of the link between language and categorization cannot be derived from learning alone.”

“These results reveal that the link between language and object categories, evident as early as three months, derives from a broader template that initially encompasses vocalizations of human and non-human primates and is rapidly tuned specifically to human vocalizations,” said Sandra Waxman, co-author and Louis W. Menk Professor of Psychology at Northwestern.

Waxman said these new results open the door to new research questions.

“Is this link sufficiently broad to include vocalizations beyond those of our closest genealogical cousins,” asks Waxman, “or is it restricted to primates, whose vocalizations may be perceptually just close enough to our own to serve as early candidates for the platform on which human language is launched?”

Journal Reference:

  1. Alissa Ferry et al. Non-human primate vocalizations support categorizations in very young human infants. Proceedings of the National Academy of Sciences, September 3, 2013

The Myth of ‘Environmental Catastrophism’ (Monthly Review)

Between October 2010 and April 2012, over 250,000 people, including 133,000 children under five, died of hunger caused by drought in Somalia. Millions more survived only because they received food aid. Scientists at the UK Met Office have shown that human-induced climate change made this catastrophe much worse than it would otherwise have been.1

This is only the beginning: the United Nations’ 2013 Human Development Report says that without coordinated global action to avert environmental disasters, especially global warming, the number of people living in extreme poverty could increase by up to 3 billion by 2050.2 Untold numbers of children will die, killed by climate change.

If a runaway train is bearing down on children, simple human solidarity dictates that anyone who sees it should shout a warning, that anyone who can should try to stop it. It is difficult to imagine how anyone could disagree with that elementary moral imperative.

And yet some do. Increasingly, activists who warn that the world faces unprecedented environmental danger are accused of catastrophism—of raising alarms that do more harm than good. That accusation, a standard feature of right-wing attacks on the environmental movement, has recently been advanced by some left-wing critics as well. While they are undoubtedly sincere, their critique of so-called environmental catastrophism does not stand up to scrutiny.

From the Right…

The word “catastrophism” originated in nineteenth-century geology, in the debate between those who believed all geological change had been gradual and those who believed there had been episodes of rapid change. Today, the word is most often used by right-wing climate change deniers for whom it is a synonym for “alarmism.”

  • The Heartland Institute: “Climate Catastrophism Picking Up Again in the U.S. and Across the World.”3
  • A right-wing German blog: “The Climate Catastrophism Cult.”4
  • The Australian journal Quadrant: “The Chilling Costs of Climate Catastrophism.”5

Examples could be multiplied. As environmental historian Franz Mauelshagen writes, “In climate denialist circles, the word ‘climate catastrophe’ has become synonymous with ‘climate lie,’ taking the anthropogenic greenhouse effect for a scam.”6

Those who hold such views like to call themselves “climate change skeptics,” but a more accurate term is “climate science deniers.” While there are uncertainties about the speed of change and its exact effects, there is no question that global warming is driven by greenhouse-gas emissions caused by human activity, and that if business as usual continues, temperatures will reach levels higher than any seen since before human beings evolved. Those who disagree are not skeptical, they are denying the best scientific evidence and analysis available.

The right labels the scientific consensus “catastrophism” to belittle environmentalism, and to stifle consideration of measures to delay or prevent the crisis. The real problem, they imply, is not the onrushing train, but the people who are yelling “get off the track!” Leaving the track would disrupt business as usual, and that is to be avoided at all costs.

…And From the Left

Until very recently, “catastrophism” as a political expression was pretty much the exclusive property of conservatives. When it did occur in left-wing writing, it referred to economic debates, not ecology. But in 2007 two quite different left-wing voices almost simultaneously adopted “catastrophism” as a pejorative term for radical ideas about climate change they disagreed with.

The most prominent was the late Alexander Cockburn, who in 2007 was writing regularly for The Nation and coediting the newsletter CounterPunch. To the shock of many of his admirers, he declared that “There is still zero empirical evidence that anthropogenic production of CO2 is making any measurable contribution to the world’s present warming trend,” and that “the human carbon footprint is of zero consequence.”7 Concern about climate change was, he wrote, the result of a conspiracy “between the Greenhouser fearmongers and the nuclear industry, now largely owned by oil companies.”8

Like critics on the right, Cockburn charged that the left was using climate change to sneak through reforms it could not otherwise win: “The left has bought into environmental catastrophism because it thinks that if it can persuade the world that there is indeed a catastrophe, then somehow the emergency response will lead to positive developments in terms of social and environmental justice.”9

While Cockburn’s assault on “environmental catastrophism” was shocking, his arguments did not add anything new to the climate debate. They were the same criticisms we had long heard from right-wing deniers, albeit with leftish vocabulary.

That was not the case with Leo Panitch and Colin Leys. These distinguished Marxist scholars are by no means deniers. They began their preface to the 2007 Socialist Register by noting that “environmental problems might be so severe as to potentially threaten the continuation of anything that might be considered tolerable human life,” and insisting that “the speed of development of globalized capitalism, epitomized by the dramatic acceleration of climate change, makes it imperative for socialists to deal seriously with these issues now.”

But then they wrote: “Nonetheless, it is important to try to avoid an anxiety-driven ecological catastrophism, parallel to the kind of crisis-driven economic catastrophism that announces the inevitable demise of capitalism.”10 They went on to argue that capitalism’s “dynamism and innovativeness” might enable it to use “green commerce” to escape environmental traps.

The problem with the Panitch–Leys formulation is that the threat of ecological catastrophe is not “parallel” to the view that capitalism will destroy itself. The desire to avoid the kind of mechanical determinism that has often characterized Marxist politics, where every crisis was proclaimed to be the final battle, led these thoughtful writers to confuse two very different kinds of catastrophe.

The idea that capitalism will inevitably face an insurmountable economic crisis and collapse is based on a misunderstanding of Marxist economic theory. While economic crises are endemic to capitalism, the system can always continue—only class struggle, only a social revolution, can overthrow capitalism and end the crisis cycle.

Large-scale environmental damage is caused by our destructive economic system, but its effect is the potentially irreversible disruption of essential natural systems. The most dramatic example is global warming: recent research shows that the earth is now warmer than at any time in the past 6,000 years, and temperatures are rising much faster than at any time since the last Ice Age. Arctic ice and the Greenland ice sheet are disappearing faster than predicted, raising the specter of flooding in coastal areas where more than a billion people live. Extreme weather events, such as giant storms, heat waves, and droughts are becoming ever more frequent. So many species are going extinct that many scientists call it a mass extinction event, comparable to the time 66 million years ago when 75 percent of all species, including the dinosaurs, were wiped out.

As the editors of Monthly Review wrote in reply to Socialist Register, if these trends continue, “we will be faced with a different world—one in which life on the planet will be massively degraded on a scale not seen for tens of millions of years.”11 To call this “anxiety-driven ecological catastrophism, parallel to…economic catastrophism” is to equate an abstract error in economic theory with some of the strongest conclusions of modern science.

A New ‘Catastrophism’ Critique

Now a new essay, provocatively titled “The Politics of Failure Have Failed,” offers a different and more sweeping left-wing critique of “environmental catastrophism.” Author Eddie Yuen is associated with the Pacifica radio program Against the Grain, and is on the editorial board of the journal Capitalism Nature Socialism.

His paper is part of a broader effort to define and critique a body of political thought called Catastrophism, in a book by that title.12 In the book’s introduction, Sasha Lilley offers this definition:

Catastrophism presumes that society is headed for a collapse, whether economic, ecological, social, or spiritual. This collapse is frequently, but not always, regarded as a great cleansing, out of which a new society will be born. Catastrophists tend to believe that an ever-intensified rhetoric of disaster will awaken the masses from their long slumber—if the mechanical failure of the system does not make such struggles superfluous. On the left, catastrophism veers between the expectation that the worse things become, the better they will be for radical fortunes, and the prediction that capitalism will collapse under its own weight. For parts of the right, worsening conditions are welcomed, with the hope they will trigger divine intervention or allow the settling of scores for any modicum of social advance over the last century.

A political category that includes both the right and the left—and that encompasses people whose concerns might be economic, ecological, social, or spiritual—is, to say the least, unreasonably broad. It is difficult to see any analytical value in a definition that lumps together anarchists, fascists, Christian fundamentalists, right-wing conspiracy nuts, pre–1914 socialists, peak-oil theorists, obscure Trotskyist groups, and even Mao Zedong.

The definition of catastrophism becomes even more problematic in Yuen’s essay.

One Of These Things Is Not Like The Others…

Years ago, the children’s television program Sesame Street would display four items—three circles and a square, three horses and a chair, and so on—while someone sang, “One of these things is not like the others, One of these things doesn’t belong.”

I thought of that when I read Yuen’s essay.

While the book’s scope is broad, most of it focuses, as Yuen writes, on “instrumental, spurious, and sometimes maniacal versions of catastrophism—including rightwing racial paranoia, religious millenarianism, liberal panics over fascism, leftist fetishization of capitalist collapse, capitalist invocation of the ‘shock doctrine’ and pop culture cliché.”

But as Yuen admits in his first paragraph, environmentalism is a very different matter, because we are in “what is unquestionably a genuine catastrophic moment in human and planetary history…. Of all of the forms of catastrophic discourse on offer, the collapse of ecological systems is unique in that it is definitively verified by a consensus within the scientific community…. It is absolutely urgent to address this by effectively and rapidly changing the direction of human society.”

If the science is clear, if widespread ecological collapse unquestionably faces us unless action is taken, why is this topic included in a book devoted to criticizing false ideas? Does it make sense to use the same term for people who believe in an imaginary train crash and for people who are trying to stop a real crash from happening?

The answer, although he does not say so, is that Yuen is using a different definition than the one Lilley gave in her introduction. Her version used the word for the belief that some form of catastrophe will have positive results—that capitalism will collapse from internal contradictions, that God will punish all sinners, that peak oil or industrial collapse will save the planet. Yuen uses the same word for the idea that environmentalists should alert people to the threat of catastrophic environmental change and try to mobilize them to prevent or minimize it.

Thus, when he refers to “a shrill note of catastrophism” in the work of James Hansen, perhaps the world’s leading climate scientist, he is not challenging the accuracy of Hansen’s analysis, but only the “narrative strategy” of clearly stating the probable results of continuing business as usual.

Yuen insists that “the veracity of apocalyptic claims about ecological collapse are separate from their effects on social, political, and economic life.” Although “the best evidence points to cascading environmental disaster,” in his view it is self-defeating to tell people that. He makes two arguments, which we can label “practical” and “principled.”

His practical argument is that by talking about “apocalyptic scenarios” environmentalists have made people more apathetic, less likely to fight for progressive change. His principled argument is that exposing and campaigning to stop tendencies towards environmental collapse has “damaging and rightward-leaning effects”—it undermines the left, promotes reactionary policies and strengthens the ruling class.

In my opinion, he is wrong on both counts.

The Truth Shall Make You Apathetic?

In Yuen’s view, the most important question facing people who are concerned about environmental destruction is: “what narrative strategies are most likely to generate effective and radical social movements?”

He is vague about what “narrative strategies” might work, but he is very firm about what does not. He argues that environmentalists have focused on explaining the environmental crisis and warning of its consequences in the belief that this will lead people to rise up and demand change, but this is a fallacy. In reality, “once convinced of apocalyptic scenarios, many Americans become more apathetic.”

Given such a sweeping assertion, it is surprising to find that the only evidence Yuen offers is a news release describing one academic paper, based on a U.S. telephone survey conducted in 2008, that purported to show that “more informed respondents both feel less personally responsible for global warming, and also show less concern for global warming.”13

Note first that being “more informed” is not the same as being “convinced of apocalyptic scenarios” or being bombarded with “increasingly urgent appeals about fixed ecological tipping points.” On the face of it, this study does not appear to contribute to our understanding of the effects of “catastrophism.”

What’s more, reading the original paper reveals that the people described as “more informed” were self-reporting. If they said they were informed, that was accepted, and no one asked if they were listening to climate scientists or to conservative talk radio. That makes the paper’s conclusion meaningless.

Later in his essay, Yuen correctly criticizes some environmentalists and scientists who “speak of ‘everyone’ as a unified subject.” But here he accepts as credible a study that purports to show how all Americans respond to information about climate change, regardless of class, gender, race, or political leanings.

The problem with such undifferentiated claims is shown in a 2011 study that examined the impact of Americans’ political opinions on their feelings about climate change. It found that liberals and Democrats who report being well-informed are more worried about climate change, while conservatives and Republicans who report being well-informed are less worried.14 Obviously the two groups mean very different things by “well-informed.”

Even if we ignore that, the study Yuen cites is a one-time snapshot—it does not tell us what radicals really need to know, which is how things are changing. For that, a more useful survey is one that scientists at Yale University and George Mason University have conducted seven times since 2008 to show shifts in U.S. public opinion.15 Based on answers to questions about their opinions, respondents are categorized according to their attitude towards global warming. The surveys show:

  • The number of people identified as “Disengaged” or “Cautious”—those we might call apathetic or uncertain—has varied very little, accounting for between 31 percent and 35 percent of the respondents every time.
  • The categories “Dismissive” or “Doubtful”—those who lean towards denial—increased between 2008 and 2010. Since then, those groups have shrunk back almost to the 2008 level.
  • In parallel, the combined “Concerned” and “Alarmed” groups shrank between 2008 and 2010, but have since largely recovered. In September 2012—before Hurricane Sandy!—there were more than twice as many Americans in these two categories as in Dismissive/Doubtful.

Another study, published in the journal Climatic Change, used seventy-four independent surveys conducted between 2002 and 2011 to create a Climate Change Threat Index (CCTI)—a measure of public concern about climate change—and showed how it changed in response to public events. It found that public concern about climate change reached an all-time high in 2006–2007, when the Al Gore documentary An Inconvenient Truth was seen in theaters by millions of people and won an Academy Award.

The authors conclude: “Our results…show that advocacy efforts produce substantial changes in public perceptions related to climate change. Specifically, the film An Inconvenient Truth and the publicity surrounding its release produced a significant positive jump in the CCTI.”16

This directly contradicts Yuen’s view that more information about climate change causes Americans to become more apathetic. There is no evidence of a long-term increase in apathy or decrease in concern—and when scientific information about climate change reached millions of people, the result was not apathy but a substantial increase in support for action to reduce greenhouse gas emissions.

‘The Two Greatest Myths’

Yuen says environmentalists have deluged Americans with catastrophic warnings, and this strategy has produced apathy, not action. Writing of establishment politicians who make exactly the same claim, noted climate change analyst Joseph Romm says, “The two greatest myths about global warming communications are 1) constant repetition of doomsday messages has been a major, ongoing strategy and 2) that strategy doesn’t work and indeed is actually counterproductive!” Contrary to liberal mythology, the North American public has not been exposed to anything even resembling the first claim. Romm writes,

The broad American public is exposed to virtually no doomsday messages, let alone constant ones, on climate change in popular culture (TV and the movies and even online)…. The major energy companies bombard the airwaves with millions and millions of dollars of repetitious pro-fossil-fuel ads. The environmentalists spend far, far less money…. Environmentalists when they do appear in popular culture, especially TV, are routinely mocked…. It is total BS that somehow the American public has been scared and overwhelmed by repeated doomsday messaging into some sort of climate fatigue.17

The website Daily Climate, which tracks U.S. news stories about climate change, says coverage peaked in 2009, during the Copenhagen talks—but then it “fell off the map,” dropping 30 percent in 2010 and another 20 percent in 2011. In 2012, despite widespread droughts and Hurricane Sandy, news coverage fell another 2 percent. The decline in editorial interest was even more dramatic—in 2012 newspapers published fewer than half as many editorials about climate change as they did in 2009.18

It should be noted that these shifts occurred in the framework of very limited news coverage of climate issues. As a leading media analyst notes, “relative to other issues like health, medicine, business, crime and government, media attention to climate change remains a mere blip.”19 Similarly, a British study describes coverage of climate change in newspapers there as “lamentably thin”—a problem exacerbated by the fact that much of the coverage consists of “worryingly persistent climate denial stories.” The author concludes drily: “The limited coverage is unlikely to have convinced readers that climate change is a serious problem warranting immediate, decisive and potentially costly action.”20

Given Yuen’s concern that Americans do not recognize the seriousness of environmental crises, it is surprising how little he says about the massive fossil-fuel-funded disinformation campaigns that have confused and distorted media reporting. I can find just four sentences on the subject in his 9,000-word text, and not one that suggests denialist campaigns might have helped undermine efforts to build a climate change movement.

On the contrary, he downplays the influence of “the well-funded climate denial lobby,” by claiming that “far more corporate and elite energy has gone toward generating anxiety about global warming,” and that “mainstream climate science is much better funded.” He provides no evidence for either statement.

Of course, the fossil-fuel lobby is not the only force working to undermine public concern about climate change. It is also important to recognize the impact of Obama’s predictable unwillingness to confront the dominant forces in U.S. capitalism, and of the craven failure of mainstream environmentalist groups and NGOs to expose and challenge the Democrats’ anti-environmental policies.

With fossil-fuel denialists on one side, and Obama’s pale-green cheerleaders on the other, activists who want to get out the truth have barely been heard. In that context, it makes little sense to blame environmentalists for sabotaging environmentalism.

The Truth Will Help the Right?

Halfway through his essay, Yuen abruptly changes direction, leaving the practical argument behind and raising his principled concern. He now argues that what he calls catastrophism leads people to support reactionary policies and promotes “the most authoritarian solutions at the state level.” Focusing attention on what he agrees is a “cascading environmental disaster” is dangerous because it “disables the left but benefits the right and capital.” He says, “Increased awareness of environmental crisis will not likely translate into a more ecological lifestyle, let alone an activist orientation against the root causes of environmental degradation. In fact, right-wing and nationalist environmental politics have much more to gain from an embrace of catastrophism.”

Yuen says that many environmentalists, including scientists, “reflexively overlook class divisions,” and so do not realize that “some business and political elites feel that they can avoid the worst consequences of the environmental crisis, and may even be able to benefit from it.” Yuen apparently thinks those elites are right—while the insurance industry is understandably worried about big claims, he says, “the opportunities for other sectors of capitalism are colossal in scope.”

He devotes much of the rest of his essay to describing the efforts of pro-capitalist forces, conservative and liberal, to use concern about potential environmental disasters to promote their own interests, ranging from emissions trading schemes to military expansion to Malthusian attacks on the world’s poorest people. “The solution offered by global elites to the catastrophe is a further program of austerity, belt-tightening, and sacrifice, the brunt of which will be borne by the world’s poor.”

Some of this is overstated. His claim that “Malthusianism is at the core of most environmental discourse,” reflects either a very limited view of environmentalism or an excessively broad definition of Malthusianism. And he seems to endorse David Noble’s bizarre theory that public concern about global warming has been engineered by a corporate conspiracy to promote carbon trading schemes.21 Nevertheless he is correct that the ruling class will do its best to profit from concern about climate change, while simultaneously offloading the costs onto the world’s poorest people.

The question is, who is he arguing with? This book says it aims to “spur debate among radicals,” but none of this is new or controversial for radicals. The insight that the interests of the ruling class are usually opposed to the interests of the rest of us has been central to left-wing thought since before Marx was born. Capitalists always try to turn crises to their advantage no matter who gets hurt, and they always try to offload the costs of their crises onto the poor and oppressed.

What needs to be proved is not that pro-capitalist forces are trying to steer the environmental movement into profitable channels, and not that many sincere environmentalists have backward ideas about the social and economic causes of ecological crises. Radicals who are active in green movements know those things perfectly well. What needs to be proved is Yuen’s view that warning about environmental disasters and campaigning to prevent them has “damaging and rightward-leaning effects” that are so severe that radicals cannot overcome them.

But no proof is offered.

What is particularly disturbing about his argument is that he devotes pages to describing the efforts of reactionaries to misdirect concern about climate change—and none to the efforts of radical environmentalists to counter those forces. Earlier in his essay, he mentioned that “environmental and climate justice perspectives are steadily gaining traction in internal environmental debates,” but those thirteen words are all he has to say on the subject.

He says nothing about the historic 2010 Cochabamba Conference, where 30,000 environmental activists from 140 countries warned that if greenhouse gas emissions are not stopped, “the damages caused to our Mother Earth will be completely irreversible”—a statement Yuen would doubtless label “catastrophist.” Far from succumbing to apathy or reactionary policies, the participants explicitly rejected market solutions, identified capitalism as the cause of the crisis, and outlined a radical program to transform the global economy.

He is equally silent about the campaign against the fraudulent “green economy” plan adopted at last year’s Rio+20 conference. One of the principal organizers of that opposition is La Via Campesina, the world’s largest organization of peasants and farmers, which warns that the world’s governments are “propagating the same capitalist model that caused climate chaos and other deep social and environmental crises.”

His essay contains not a word about Idle No More, or Occupy, or the Indigenous-led fight against Canada’s tar sands, or the anti-fracking and anti-coal movements. By omitting them, Yuen leaves the false impression that the climate movement is helpless to resist reactionary forces.

Contrary to Yuen’s title, the effort to build a movement to save the planet has not failed. Indeed, Catastrophism was published just four months before the largest U.S. climate change demonstration ever!

The question before radicals is not what “narrative strategy” to adopt, but rather, how will we relate to the growing environmental movement? How will we support its goals while strengthening the forces that see the need for more radical solutions?

What Must Be Done?

Yuen opposes attempts to build a movement around rallies, marches, and other mass protests to get out the truth and to demand action against environmental destruction. He says that strategy worked in the 1960s, when Americans were well-off and naïve, but cannot be replicated in today’s “culture of atomized cynicism.”

Like many who know that decade only from history books or as distant memories, Yuen foreshortens the experience: he knows about the mass protests and dissent late in the decade, but ignores the many years of educational work and slow movement building in a deeply reactionary and racist time. It is not predetermined that the campaign against climate change will take as long as those struggles, or take similar forms, but the real experience of the 1960s should at least be a warning against premature declarations of failure.

Yuen is much less explicit about what he thinks would be an effective strategy, but he cites as positive examples the efforts of some to promote “a bottom-up and egalitarian transition” by:

ever-increasing numbers of people who are voluntarily engaging in intentional communities, sustainability projects, permaculture and urban farming, communing and militant resistance to consumerism…we must consider the alternative posed by the highly imaginative Italian left of the twentieth century. The explosively popular Slow Food movement was originally built on the premise that a good life can be had not through compulsive excess but through greater conviviality and a shared commonwealth.

Compare that to this list of essential tasks, prepared recently by Pablo Solón, a leading figure in the global climate justice movement:

To reduce greenhouse gas emissions to a level that avoids catastrophe, we need to:

* Leave more than two-thirds of the fossil fuel reserves under the soil;

* Stop the exploitation of tar sands, shale gas and coal;

* Support small, local, peasant and indigenous community farming while we dismantle big agribusiness that deforests and heats the planet;

* Promote local production and consumption of products, reducing the free trade of goods that send millions of tons of CO2 while they travel around the world;

* Stop extractive industries from further destroying nature and contaminating our atmosphere and our land;

* Increase significantly public transport to reduce the unsustainable “car way of life”;

* Reduce the emissions of warfare by promoting genuine peace and dismantling the military and war industry and infrastructure.22

The projects that Yuen describes are worthwhile, but unless the participants are also committed to building mass environmental campaigns, they will not be helping to achieve the vital objectives that Solón identifies. Posing local communes and slow food as alternatives to building a movement against global climate change is effectively a proposal to abandon the fight against capitalist ecocide in favor of creating greenish enclaves, while the world burns.

Bright-siding versus Movement Building

Whatever its merits in other contexts, it is not helpful or appropriate to use the word catastrophism as a synonym for telling the truth about the environmental dangers we face. Using the same language as right-wing climate science deniers gives the impression that the dangers are non-existent or exaggerated. Putting accurate environmental warnings in the same category as apocalyptic Christian fundamentalism and century-old misreadings of Marxist economic theory leads to underestimation of the threats we face and directs efforts away from mobilizing an effective counterforce.

Yuen’s argument against publicizing the scientific consensus on climate change echoes the myth that liberal politicians and journalists use to justify their failure to challenge the crimes of the fossil-fuel industry. People are tired of all that doom and gloom, they say. It is time for positive messages! Or, to use Yuen’s vocabulary, environmentalists need to end “apocalyptic rhetoric” and find better “narrative strategies.”

This is fundamentally an elitist position: the people cannot handle the truth, so a knowledgeable minority must sugarcoat it, to make the necessary changes palatable.

David Spratt of the Australian organization Climate Code Red calls that approach “bright-siding,” a reference to the bitterly satirical Monty Python song, “Always Look on the Bright Side of Life.”

The problem is, Spratt writes: “If you avoid including an honest assessment of climate science and impacts in your narrative, it’s pretty difficult to give people a grasp about where the climate system is heading and what needs to be done to create the conditions for living in climate safety, rather than increasing and eventually catastrophic harm.”23 Joe Romm makes the same point: “You’d think it would be pretty obvious that the public is not going to be concerned about an issue unless one explains why they should be concerned.”24

Of course, this does not mean that we only need to explain the science. We need to propose concrete goals, as Pablo Solón has done. We need to show how the scientific consensus about climate change relates to local and national concerns such as pipelines, tar sands, fracking, and extreme weather. We need to work with everyone who is willing to confront any aspect of the crisis, from people who still have illusions about capitalism to convinced revolutionaries. Activists in the wealthy countries must be unstinting in their political and practical solidarity with the primary victims of climate change, indigenous peoples, and impoverished masses everywhere.

We need to do all of that and more.

But the first step is to tell the truth—about the danger we face, about its causes, and about the measures that must be taken to turn back the threat. In a time of universal deceit, telling the truth is a revolutionary act.

Notes

  1.  Fraser C. Lott, Nikolaos Christidis, and Peter A. Stott, “Can the 2011 East African Drought Be Attributed to Human-Induced Climate Change?,” Geophysical Research Letters 40, no. 6 ( March 2013): 1177–81.
  2.  UNDP, “’Rise of South’ Transforming Global Power Balance, Says 2013 Human Development Report,” March 14, 2013, http://undp.org.
  3.  Tom Harris, “Climate Catastrophism Picking Up Again in the U.S. and Across the World,” Somewhat Reasonable, October 10, 2012, http://blog.heartland.org.
  4.  Pierre Gosselin, “The Climate Catastrophism Cult,” NoTricksZone, February 12, 2011, http://notrickszone.com.
  5.  Ray Evans, “The Chilling Costs of Climate Catastrophism,” Quadrant Online, June 2008, http://quadrant.org.au.
  6.  Franz Mauelshagen, “Climate Catastrophism: The History of the Future of Climate Change,” in Andrea Janku, Gerrit Schenk, and Franz Mauelshagen, Historical Disasters in Context: Science, Religion, and Politics (New York: Routledge, 2012), 276.
  7.  Alexander Cockburn, “Is Global Warming a Sin?,” CounterPunch, April 28–30, 2007, http://counterpunch.org.
  8.  Alexander Cockburn, “Who are the Merchants of Fear?,” CounterPunch, May 12–14, 2007, http://counterpunch.org.
  9.  Alexander Cockburn, “I Am An Intellectual Blasphemer,” Spiked Review of Books, January 9, 2008, http://spiked-online.com.
  10.  Leo Panitch and Colin Leys, “Preface,” Socialist Register 2007: Coming to Terms With Nature (London: Merlin Press/Monthly Review Press, 2006), ix–x.
  11.  “Notes from the Editors,” Monthly Review 58, no. 10 (March 2007), http://monthlyreview.org.
  12.  Sasha Lilley, David McNally, Eddie Yuen, and James Davis, Catastrophism: The Apocalyptic Politics of Collapse and Rebirth (Oakland: PM Press, 2012).
  13.  Yuen’s footnote cites an article which is identical to a news release issued the previous day by Texas A&M University; see “Increased Knowledge About Global Warming Leads to Apathy, Study Shows,” Science Daily, March 28, 2008, http://eurekalert.org. The original paper, which Yuen does not cite, is: P.M. Kellstedt, S. Zahran, and A. Vedlitz, “Personal Efficacy, the Information Environment, and Attitudes Towards Global Warming and Climate Change in the United States,” Risk Analysis 28, no. 1 (2008): 113–26.
  14.  Aaron M. McCright and Riley E. Dunlap, “The Politicization of Climate Change and Polarization in the American Public’s Views of Global Warming, 2001–2010,” The Sociological Quarterly 52 (2011): 155–94.
  15.  A. Leiserowitz, et. al., Global Warming’s Six Americas, September 2012 (New Haven, CT: Yale Project on Climate Change Communication, 2013), http://environment.yale.edu.
  16.  Robert J. Brulle, Jason Carmichael, and J. Craig Jenkins, “Shifting Public Opinion on Climate Change: An Empirical Assessment of Factors Influencing Concern Over Climate Change in the U.S., 2002–2010,” Climatic Change 114, no. 2 (September 2012): 169–88.
  17.  Joe Romm, “Apocalypse Not: The Oscars, The Media and the Myth of ‘Constant Repetition of Doomsday Messages’ on Climate,” Climate Progress, February 24, 2013, http://thinkprogress.org.
  18.  Douglas Fischer. “2010 in Review: The Year Climate Coverage ‘Fell off the Map,’” Daily Climate, January 3, 2011. http://dailyclimate.org; “Climate Coverage Down Again in 2011,” Daily Climate, January 3, 2012, http://dailyclimate.org; “Climate Coverage, Dominated by Weird Weather, Falls Further in 2012,” Daily Climate, January 2, 2013, http://dailyclimate.org.
  19.  Maxwell T. Boykoff, Who Speaks for the Climate?: Making Sense of Media Reporting on Climate Change (Cambridge: Cambridge University Press, 2011), 24.
  20.  Neil T. Gavin, “Addressing Climate Change: A Media Perspective,” Environmental Politics 18, no. 5 (September 2009): 765–80.
  21.  Two responses to David Noble are: Derrick O’Keefe, “Denying Time and Place in the Global Warming Debate,” Climate & Capitalism, June 7, 2007, http://climateandcapitalism.com; Justin Podur, “Global Warming Suspicions and Confusions,” ZNet, May 11, 2007, http://zcommunications.org.
  22.  Pablo Solón, “A Contribution to the Climate Space 2013: How to Overcome the Climate Crisis?,”Climate Space, March 14, 2013, http://climatespace2013.wordpress.com.
  23.  David Spratt, Always Look on the Bright Side of Life: Bright-siding Climate Advocacy and Its Consequences, April 2012, http://climatecodered.org.
  24.  Joe Romm, “Apocalypse Not.”

Ian Angus is editor of the online journal Climate & Capitalism. He is co-author of Too Many People? Population, Immigration, and the Environmental Crisis (Haymarket, 2011), and editor of The Global Fight for Climate Justice (Fernwood, 2010).
He would like to thank Simon Butler, Martin Empson, John Bellamy Foster, John Riddell, Javier Sethness, and Chris Williams for comments and suggestions.

Rising Seas (Nat Geo)

Seaside Heights, New Jersey, after Hurricane Sandy

As the planet warms, the sea rises. Coastlines flood. What will we protect? What will we abandon? How will we face the danger of rising seas?

By Tim Folger

Photographs by George Steinmetz

September 2013

By the time Hurricane Sandy veered toward the Northeast coast of the United States last October 29, it had mauled several countries in the Caribbean and left dozens dead. Faced with the largest storm ever spawned over the Atlantic, New York and other cities ordered mandatory evacuations of low-lying areas. Not everyone complied. Those who chose to ride out Sandy got a preview of the future, in which a warmer world will lead to inexorably rising seas.

Brandon d’Leo, a 43-year-old sculptor and surfer, lives on the Rockaway Peninsula, a narrow, densely populated, 11-mile-long sandy strip that juts from the western end of Long Island. Like many of his neighbors, d’Leo had remained at home through Hurricane Irene the year before. “When they told us the tidal surge from this storm would be worse, I wasn’t afraid,” he says. That would soon change.

D’Leo rents a second-floor apartment in a three-story house across the street from the beach on the peninsula’s southern shore. At about 3:30 in the afternoon he went outside. Waves were crashing against the five-and-a-half-mile-long boardwalk. “Water had already begun to breach the boardwalk,” he says. “I thought, Wow, we still have four and a half hours until high tide. In ten minutes the water probably came ten feet closer to the street.”

Back in his apartment, d’Leo and a neighbor, Davina Grincevicius, watched the sea as wind-driven rain pelted the sliding glass door of his living room. His landlord, fearing the house might flood, had shut off the electricity. As darkness fell, Grincevicius saw something alarming. “I think the boardwalk just moved,” she said. Within minutes another surge of water lifted the boardwalk again. It began to snap apart.

Three large sections of the boardwalk smashed against two pine trees in front of d’Leo’s apartment. The street had become a four-foot-deep river, as wave after wave poured water onto the peninsula. Cars began to float in the churning water, their wailing alarms adding to the cacophony of wind, rushing water, and cracking wood. A bobbing red Mini Cooper, its headlights flashing, became wedged against one of the pine trees in the front yard. To the west the sky lit up with what looked like fireworks—electrical transformers were exploding in Breezy Point, a neighborhood near the tip of the peninsula. More than one hundred homes there burned to the ground that night.

The trees in the front yard saved d’Leo’s house, and maybe the lives of everyone inside—d’Leo, Grincevicius, and two elderly women who lived in an apartment downstairs. “There was no option to get out,” d’Leo says. “I have six surfboards in my apartment, and I was thinking, if anything comes through the wall, I’ll try to get everyone on those boards and try to get up the block. But if we’d had to get in that water, it wouldn’t have been good.”

After a fitful night’s sleep d’Leo went outside shortly before sunrise. The water had receded, but thigh-deep pools still filled parts of some streets. “Everything was covered with sand,” he says. “It looked like another planet.”

A profoundly altered planet is what our fossil-fuel-driven civilization is creating, a planet where Sandy-scale flooding will become more common and more destructive for the world’s coastal cities. By releasing carbon dioxide and other heat-trapping gases into the atmosphere, we have warmed the Earth by more than a full degree Fahrenheit over the past century and raised sea level by about eight inches. Even if we stopped burning all fossil fuels tomorrow, the existing greenhouse gases would continue to warm the Earth for centuries. We have irreversibly committed future generations to a hotter world and rising seas.

In May the concentration of carbon dioxide in the atmosphere reached 400 parts per million, the highest level in at least three million years. Sea levels then may have been as much as 65 feet above today’s; the Northern Hemisphere was largely ice free year-round. It would take centuries for the oceans to reach such catastrophic heights again, and much depends on whether we manage to limit future greenhouse gas emissions. In the short term scientists are still uncertain about how fast and how high seas will rise. Estimates have repeatedly been too conservative.

Global warming affects sea level in two ways. About a third of its current rise comes from thermal expansion—from the fact that water grows in volume as it warms. The rest comes from the melting of ice on land. So far it’s been mostly mountain glaciers, but the big concern for the future is the giant ice sheets in Greenland and Antarctica. Six years ago the Intergovernmental Panel on Climate Change (IPCC) issued a report predicting a maximum of 23 inches of sea-level rise by the end of this century. But that report intentionally omitted the possibility that the ice sheets might flow more rapidly into the sea, on the grounds that the physics of that process was poorly understood.

As the IPCC prepares to issue a new report this fall, in which the sea-level forecast is expected to be slightly higher, gaps in ice-sheet science remain. But climate scientists now estimate that Greenland and Antarctica combined have lost on average about 50 cubic miles of ice each year since 1992—roughly 200 billion metric tons of ice annually. Many think sea level will be at least three feet higher than today by 2100. Even that figure might be too low.
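The two figures quoted here, 50 cubic miles of ice and roughly 200 billion metric tons per year, are consistent, and the implied contribution to sea level can be checked with back-of-envelope arithmetic. The sketch below uses textbook constants (ice and seawater densities, global ocean area) that are standard approximations, not figures from the article:

```python
# Back-of-envelope check of the quoted ice-loss numbers (a sketch;
# the physical constants are standard approximations, not from the article).
MILE_M = 1609.344        # meters per mile
RHO_ICE = 917.0          # kg/m^3, density of glacial ice
RHO_SEAWATER = 1025.0    # kg/m^3, approximate seawater density
OCEAN_AREA_M2 = 3.61e14  # global ocean surface area, m^2

volume_m3 = 50 * MILE_M**3            # 50 cubic miles of ice per year
mass_gt = volume_m3 * RHO_ICE / 1e12  # gigatonnes (billion metric tons)

# Spread the equivalent volume of seawater over the ocean surface.
rise_mm = (volume_m3 * RHO_ICE / RHO_SEAWATER) / OCEAN_AREA_M2 * 1000

print(f"{mass_gt:.0f} Gt/yr, ~{rise_mm:.2f} mm/yr of sea-level rise")
```

The result, roughly 190 gigatonnes and about half a millimeter of sea-level rise per year, matches the article’s "roughly 200 billion metric tons" and shows why ice sheets were still a modest contributor in the early 2010s; the concern in the text is acceleration, not the then-current rate.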

“In the last several years we’ve observed accelerated melting of the ice sheets in Greenland and West Antarctica,” says Radley Horton, a research scientist at Columbia University’s Earth Institute in New York City. “The concern is that if the acceleration continues, by the time we get to the end of the 21st century, we could see sea-level rise of as much as six feet globally instead of two to three feet.” Last year an expert panel convened by the National Oceanic and Atmospheric Administration adopted 6.6 feet (two meters) as its highest of four scenarios for 2100. The U.S. Army Corps of Engineers recommends that planners consider a high scenario of five feet.

One of the biggest wild cards in all sea-level-rise scenarios is the massive Thwaites Glacier in West Antarctica. Four years ago NASA sponsored a series of flights over the region that used ice-penetrating radar to map the seafloor topography. The flights revealed that a 2,000-foot-high undersea ridge holds the Thwaites Glacier in place, slowing its slide into the sea. A rising sea could allow more water to seep between ridge and glacier and eventually unmoor it. But no one knows when or if that will happen. “That’s one place I’m really nervous about,” says Richard Alley, a glaciologist at Penn State University and an author of the last IPCC report. “It involves the physics of ice fracture that we really don’t understand.” If the Thwaites Glacier breaks free from its rocky berth, that would liberate enough ice to raise sea level by three meters—nearly ten feet. “The odds are in our favor that it won’t put three meters in the ocean in the next century,” says Alley. “But we can’t absolutely guarantee that. There’s at least some chance that something very nasty will happen.”

Even in the absence of something very nasty, coastal cities face a twofold threat: Inexorably rising oceans will gradually inundate low-lying areas, and higher seas will extend the ruinous reach of storm surges. The threat will never go away; it will only worsen. By the end of the century a hundred-year storm surge like Sandy’s might occur every decade or less. Using a conservative prediction of a half meter (20 inches) of sea-level rise, the Organisation for Economic Co-operation and Development estimates that by 2070, 150 million people in the world’s large port cities will be at risk from coastal flooding, along with $35 trillion worth of property—an amount that will equal 9 percent of the global GDP. How will they cope?

“During the last ice age there was a mile or two of ice above us right here,” says Malcolm Bowman, as we pull into his driveway in Stony Brook, New York, on Long Island’s north shore. “When the ice retreated, it left a heap of sand, which is Long Island. All these rounded stones you see—look there,” he says, pointing to some large boulders scattered among the trees near his home. “They’re glacial boulders.”

Bowman, a physical oceanographer at the State University of New York at Stony Brook, has been trying for years to persuade anyone who will listen that New York City needs a harbor-spanning storm-surge barrier. Compared with some other leading ports, New York is essentially defenseless in the face of hurricanes and floods. London, Rotterdam, St. Petersburg, New Orleans, and Shanghai have all built levees and storm barriers in the past few decades. New York paid a high price for its vulnerability last October. Sandy left 43 dead in the city, of whom 35 drowned; it cost the city some $19 billion. And it was all unnecessary, says Bowman.

“If a system of properly designed storm-surge barriers had been built—and strengthened with sand dunes at both ends along the low-lying coastal areas—there would have been no flooding damage from Sandy,” he says.

Bowman envisions two barriers: one at Throgs Neck, to keep surges from Long Island Sound out of the East River, and a second one spanning the harbor south of the city. Gates would accommodate ships and tides, closing only during storms, much like existing structures in the Netherlands and elsewhere. The southern barrier alone, stretching five miles between Sandy Hook, New Jersey, and the Rockaway Peninsula, might cost $10 billion to $15 billion, Bowman estimates. He pictures a six-lane toll highway on top that would provide a bypass route around the city and a light-rail line connecting the Newark and John F. Kennedy Airports.

“It could be an asset to the region,” says Bowman. “Eventually the city will have to face up to this, because the problem is going to get worse. It might take five years of study and another ten years to get the political will to do it. By then there might have been another disaster. We need to start planning immediately. Otherwise we’re mortgaging the future and leaving the next generation to cope as best it can.”

Another way to safeguard New York might be to revive a bit of its past. In the 16th-floor loft of her landscape architectural firm in lower Manhattan, Kate Orff pulls out a map of New York Harbor in the 19th century. The present-day harbor shimmers outside her window, calm and unthreatening on an unseasonably mild morning three months to the day after Sandy hit.

“Here’s an archipelago that protected Red Hook,” Orff says, pointing on the map to a small cluster of islands off the Brooklyn shore. “There was another chain of shoals that connected Sandy Hook to Coney Island.”

The islands and shallows vanished long ago, demolished by harbor-dredging and landfill projects that added new real estate to a burgeoning city. Orff would re-create some of them, particularly the Sandy Hook–Coney Island chain, and connect them with sluice gates that would close during a storm, forming an eco-engineered barrier that would cross the same waters as Bowman’s more conventional one. Behind it, throughout the harbor, would be dozens of artificial reefs built from stone, rope, and wood pilings and seeded with oysters and other shellfish. The reefs would continue to grow as sea levels rose, helping to buffer storm waves—and the shellfish, being filter feeders, would also help clean the harbor. “Twenty-five percent of New York Harbor used to be oyster beds,” Orff says.

Orff estimates her “oystertecture” vision could be brought to life at relatively low cost. “It would be chump change compared with a conventional barrier. And it wouldn’t be money wasted: Even if another Sandy never happens, you’d have a cleaner, restored harbor in a more ecologically vibrant context and a healthier New York.”

In June, Mayor Michael Bloomberg outlined a $19.5 billion plan to defend New York City against rising seas. “Sandy was a temporary setback that can ultimately propel us forward,” he said. The mayor’s proposal calls for the construction of levees, local storm-surge barriers, sand dunes, oyster reefs, and more than 200 other measures. It goes far beyond anything planned by any other American city. But the mayor dismissed the idea of a harbor barrier. “A giant barrier across our harbor is neither practical nor affordable,” Bloomberg said. The plan notes that since a barrier would remain open most of the time, it would not protect the city from the inch-by-inch creep of sea-level rise.

Meanwhile, development in the city’s flood zones continues. Klaus Jacob, a geophysicist at Columbia University, says the entire New York metropolitan region urgently needs a master plan to ensure that future construction will at least not exacerbate the hazards from rising seas.

“The problem is we’re still building the city of the past,” says Jacob. “The people of the 1880s couldn’t build a city for the year 2000—of course not. And we cannot build a year-2100 city now. But we should not build a city now that we know will not function in 2100. There are opportunities to renew our infrastructure. It’s not all bad news. We just have to grasp those opportunities.”

Will New York grasp them after Bloomberg leaves office at the end of this year? And can a single storm change not just a city’s but a nation’s policy? It has happened before. The Netherlands had its own stormy reckoning 60 years ago, and it transformed the country.

The storm roared in from the North Sea on the night of January 31, 1953. Ria Geluk was six years old at the time and living where she lives today, on the island of Schouwen Duiveland in the southern province of Zeeland. She remembers a neighbor knocking on the door of her parents’ farmhouse in the middle of the night to tell them that the dike had failed. Later that day the whole family, along with several neighbors who had spent the night, climbed to the roof, where they huddled in blankets and heavy coats in the wind and rain. Geluk’s grandparents lived just across the road, but water swept into the village with such force that they were trapped in their home. They died when it collapsed.

“Our house kept standing,” says Geluk. “The next afternoon the tide came again. My father could see around us what was happening; he could see houses disappearing. You knew when a house disappeared, the people were killed. In the afternoon a fishing boat came to rescue us.”

In 1997 Geluk helped found the Watersnoodmuseum—the “flood museum”—on Schouwen Duiveland. The museum is housed in four concrete caissons that engineers used to plug dikes in 1953. The disaster killed 1,836 in all, nearly half in Zeeland, including a baby born on the night of the storm.

Afterward the Dutch launched an ambitious program of dike and barrier construction called the Delta Works, which lasted more than four decades and cost more than six billion dollars. One crucial project was the five-mile-long Oosterscheldekering, or Eastern Scheldt barrier, completed 27 years ago to defend Zeeland from the sea. Geluk points to it as we stand on a bank of the Scheldt estuary near the museum, its enormous pylons just visible on the horizon. The final component of the Delta Works, a movable barrier protecting Rotterdam Harbor and some 1.5 million people, was finished in 1997.

Like other primary sea barriers in the Netherlands, it’s built to withstand a 1-in-10,000-year storm—the strictest standard in the world. (The United States uses a 1-in-100-year standard.) The Dutch government is now considering whether to upgrade the protection levels to bring them in line with sea-level-rise projections.

Such measures are a matter of national security for a country where 26 percent of the land lies below sea level. With more than 10,000 miles of dikes, the Netherlands is fortified to such an extent that hardly anyone thinks about the threat from the sea, largely because much of the protection is so well integrated into the landscape that it’s nearly invisible.

On a bitingly cold February afternoon I spend a couple of hours walking around Rotterdam with Arnoud Molenaar, the manager of the city’s Climate Proof program, which aims to make Rotterdam fully resilient to climate change by 2025. About 20 minutes into our walk we climb a sloping street next to a museum designed by the architect Rem Koolhaas. The presence of a hill in this flat city should have alerted me, but I’m surprised when Molenaar tells me that we’re walking up the side of a dike. He gestures to some nearby pedestrians. “Most of the people around us don’t realize this is a dike either,” he says. The Westzeedijk shields the inner city from the Meuse River a few blocks to the south, but the broad, busy boulevard on top of it looks like any other Dutch thoroughfare, with flocks of cyclists wheeling along in dedicated lanes.

As we walk, Molenaar points out assorted subtle flood-control structures: an underground parking garage designed to hold 10,000 cubic meters—more than 2.5 million gallons—of rainwater; a street flanked by two levels of sidewalks, with the lower one designed to store water, leaving the upper walkway dry. Late in the afternoon we arrive at Rotterdam’s Floating Pavilion, a group of three connected, transparent domes on a platform in a harbor off the Meuse. The domes, about three stories tall, are made of a plastic that’s a hundred times as light as glass.

Inside we have sweeping views of Rotterdam’s skyline; hail clatters overhead as low clouds scud in from the North Sea. Though the domes are used for meetings and exhibitions, their main purpose is to demonstrate the wide potential of floating urban architecture. By 2040 the city anticipates that as many as 1,200 homes will float in the harbor. “We think these structures will be important not just for Rotterdam but for many cities around the world,” says Bart Roeffen, the architect who designed the pavilion. The homes of 2040 will not necessarily be domes; Roeffen chose that shape for its structural integrity and its futuristic appeal. “To build on water is not new, but to develop floating communities on a large scale and in a harbor with tides—that is new,” says Molenaar. “Instead of fighting against water, we want to live with it.”

While visiting the Netherlands, I heard one joke repeatedly: “God may have built the world, but the Dutch built Holland.” The country has been reclaiming land from the sea for nearly a thousand years—much of Zeeland was built that way. Sea-level rise does not yet panic the Dutch.

“We cannot retreat! Where could we go? Germany?” Jan Mulder has to shout over the wind—we’re walking along a beach called Kijkduin as volleys of sleet exfoliate our faces. Mulder is a coastal morphologist with Deltares, a private coastal management firm. This morning he and Douwe Sikkema, a project manager with the province of South Holland, have brought me to see the latest in adaptive beach protection. It’s called the zandmotor—the sand engine.

The seafloor offshore, they explain, is thick with hundreds of feet of sand deposited by rivers and retreating glaciers. North Sea waves and currents once distributed that sand along the coast. But as sea level has risen since the Ice Age, the waves no longer reach deep enough to stir up sand, and the currents have less sand to spread around. Instead the sea erodes the coast here.

The typical solution would be to dredge sand offshore and dump it directly on the eroding beaches—and then repeat the process year after year as the sand washes away. Mulder and his colleagues recommended that the provincial government try a different strategy: a single gargantuan dredging operation to create the sandy peninsula we’re walking on—a hook-shaped stretch of beach the size of 250 football fields. If the scheme works, over the next 20 years the wind, waves, and tides will spread its sand 15 miles up and down the coast. The combination of wind, waves, tides, and sand is the zandmotor.

The project started only two years ago, but it seems to be working. Mulder shows me small dunes that have started to grow on a beach where there was once open water. “It’s very flexible,” he says. “If we see that sea-level rise increases, we can increase the amount of sand.” Sikkema adds, “And it’s much easier to adjust the amount of sand than to rebuild an entire system of dikes.”

Later Mulder tells me about a memorial inscription affixed to the Eastern Scheldt barrier in Zeeland: “It says, ‘Hier gaan over het tij, de maan, de wind, en wij—Here the tide is ruled by the moon, the wind, and us.’ ” It reflects the confidence of a generation that took for granted, as we no longer can, a reasonably stable world. “We have to understand that we are not ruling the world,” says Mulder. “We need to adapt.”

With the threats of climate change and sea-level rise looming over us all, cities around the world, from New York to Ho Chi Minh City, have turned to the Netherlands for guidance. One Dutch firm, Arcadis, has prepared a conceptual design for a storm-surge barrier in the Verrazano Narrows to protect New York City. The same company helped design a $1.1 billion, two-mile-long barrier that protected New Orleans from a 13.6-foot storm surge last summer, when Hurricane Isaac hit. The Lower Ninth Ward, which suffered so greatly during Hurricane Katrina, was unscathed.

“Isaac was a tremendous victory for New Orleans,” Piet Dircke, an Arcadis executive, tells me one night over dinner in Rotterdam. “All the barriers were closed; all the levees held; all the pumps worked. You didn’t hear about it? No, because nothing happened.”

New Orleans may be safe for a few decades, but the long-term prospects for it and other low-lying cities look dire. Among the most vulnerable is Miami. “I cannot envision southeastern Florida having many people at the end of this century,” says Hal Wanless, chairman of the department of geological sciences at the University of Miami. We’re sitting in his basement office, looking at maps of Florida on his computer. At each click of the mouse, the years pass, the ocean rises, and the peninsula shrinks. Freshwater wetlands and mangrove swamps collapse—a death spiral that has already started on the southern tip of the peninsula. With seas four feet higher than they are today—a distinct possibility by 2100—about two-thirds of southeastern Florida is inundated. The Florida Keys have almost vanished. Miami is an island.

When I ask Wanless if barriers might save Miami, at least in the short term, he leaves his office for a moment. When he returns, he’s holding a foot-long cylindrical limestone core. It looks like a tube of gray, petrified Swiss cheese. “Try to plug this up,” he says. Miami and most of Florida sit atop a foundation of highly porous limestone. The limestone consists of the remains of countless marine creatures deposited more than 65 million years ago, when a warm, shallow sea covered what is now Florida—a past that may resemble the future here.

A barrier would be pointless, Wanless says, because water would just flow through the limestone beneath it. “No doubt there will be some dramatic engineering feats attempted,” he says. “But the limestone is so porous that even massive pumping systems won’t be able to keep the water out.”

Sea-level rise has already begun to threaten Florida’s freshwater supply. About a quarter of the state’s 19 million residents depend on wells sunk into the enormous Biscayne aquifer. Salt water is now seeping into it from dozens of canals that were built to drain the Everglades. For decades the state has tried to control the saltwater influx by building dams and pumping stations on the drainage canals. These “salinity-control structures” maintain a wall of fresh water behind them to block the underground intrusion of salt water. To offset the greater density of salt water, the freshwater level in the control structures is generally kept about two feet higher than the encroaching sea.
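The two-foot rule of thumb described here follows from a simple hydrostatic balance: a freshwater column must weigh at least as much as the saltwater column it opposes. The sketch below applies the standard Ghyben-Herzberg density ratio; the densities are textbook values and the framing is illustrative, not taken from the article:

```python
# Why ~2 feet of freshwater head can hold back the sea: a hydrostatic
# sketch using the standard Ghyben-Herzberg density ratio
# (textbook values, not figures from the article).
RHO_FRESH = 1000.0  # kg/m^3, fresh water
RHO_SALT = 1025.0   # kg/m^3, seawater

def head_needed_ft(depth_ft: float) -> float:
    """Extra freshwater head needed to block saltwater intrusion to a
    given depth: rho_f * (d + h) >= rho_s * d  =>  h = d*(rho_s - rho_f)/rho_f."""
    return depth_ft * (RHO_SALT - RHO_FRESH) / RHO_FRESH

# Equivalently, each foot of head protects ~40 feet of depth, so a
# two-foot head balances a saltwater column roughly 80 feet deep.
depth_protected_ft = 2.0 * RHO_FRESH / (RHO_SALT - RHO_FRESH)
print(f"2 ft of head protects to ~{depth_protected_ft:.0f} ft depth")
```

The same ratio explains the bind described next: as the sea rises, ever more fresh water must be held behind the structures to maintain the balance, which is exactly the "flooding the state from the inside" problem.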

But the control structures also serve a second function: During the state’s frequent rainstorms their gates must open to discharge the flood of fresh water to the sea. “We have about 30 salinity-control structures in South Florida,” says Jayantha Obeysekera, the chief hydrological modeler at the South Florida Water Management District. “At times now the water level in the sea is higher than the freshwater level in the canal.” That both accelerates saltwater intrusion and prevents the discharge of flood waters. “The concern is that this will get worse with time as the sea-level rise accelerates,” Obeysekera says.

Using fresh water to block the salt water will eventually become impractical, because the amount of fresh water needed would submerge ever larger areas behind the control structures, in effect flooding the state from the inside. “With 50 centimeters [about 20 inches] of sea-level rise, 80 percent of the salinity-control structures in Florida will no longer be functional,” says Wanless. “We’ll either have to drown communities to keep the freshwater head above sea level or have saltwater intrusion.” When sea level rises two feet, he says, Florida’s aquifers may be poisoned beyond recovery. Even now, during unusually high tides, seawater spouts from sewers in Miami Beach, Fort Lauderdale, and other cities, flooding streets.

In a state exposed to hurricanes as well as rising seas, people like John Van Leer, an oceanographer at the University of Miami, worry that one day they will no longer be able to insure—or sell—their houses. “If buyers can’t insure it, they can’t get a mortgage on it. And if they can’t get a mortgage, you can only sell to cash buyers,” Van Leer says. “What I’m looking for is a climate-change denier with a lot of money.”

Unless we change course dramatically in the coming years, our carbon emissions will create a world utterly different in its very geography from the one in which our species evolved. “With business as usual, the concentration of carbon dioxide in the atmosphere will reach around a thousand parts per million by the end of the century,” says Gavin Foster, a geochemist at the University of Southampton in England. Such concentrations, he says, haven’t been seen on Earth since the early Eocene epoch, 50 million years ago, when the planet was completely ice free. According to the U.S. Geological Survey, sea level on an iceless Earth would be as much as 216 feet higher than it is today. It might take thousands of years and more than a thousand parts per million to create such a world—but if we burn all the fossil fuels, we will get there.

No matter how much we reduce our greenhouse gas emissions, Foster says, we’re already locked in to at least several feet of sea-level rise, and perhaps several dozens of feet, as the planet slowly adjusts to the amount of carbon that’s in the atmosphere already. A recent Dutch study predicted that the Netherlands could engineer solutions at a manageable cost to a rise of as much as five meters, or 16 feet. Poorer countries will struggle to adapt to much less. At different times in different places, engineering solutions will no longer suffice. Then the retreat from the coast will begin. In some places there will be no higher ground to retreat to.

By the next century, if not sooner, large numbers of people will have to abandon coastal areas in Florida and other parts of the world. Some researchers fear a flood tide of climate-change refugees. “From the Bahamas to Bangladesh and a major amount of Florida, we’ll all have to move, and we may have to move at the same time,” says Wanless. “We’re going to see civil unrest, war. You just wonder how—or if—civilization will function. How thin are the threads that hold it all together? We can’t comprehend this. We think Miami has always been here and will always be here. How do you get people to realize that Miami—or London—will not always be there?”

What will New York look like in 200 years? Klaus Jacob, the Columbia geophysicist, sees downtown Manhattan as a kind of Venice, subject to periodic flooding, perhaps with canals and yellow water cabs. Much of the city’s population, he says, will gather on high ground in the other boroughs. “High ground will become expensive, waterfront will become cheap,” he says. But among New Yorkers, as among the rest of us, the idea that the sea is going to rise—a lot—hasn’t really sunk in yet. Of the thousands of people in New York State whose homes were badly damaged or destroyed by Sandy’s surge, only 10 to 15 percent are expected to accept the state’s offer to buy them out at their homes’ pre-storm value. The rest plan to rebuild.

Is War Really Disappearing? New Analysis Suggests Not (Science Daily)

Aug. 29, 2013 — While some researchers have claimed that war between nations is in decline, a new analysis suggests we shouldn’t be too quick to celebrate a more peaceful world.

The study finds that there is no clear trend indicating that nations are less eager to wage war, said Bear Braumoeller, author of the study and associate professor of political science at The Ohio State University.

Conflict does appear to be less common than it had been in the past, he said. But that’s due more to an inability to fight than to an unwillingness to do so.

“As empires fragment, the world has split up into countries that are smaller, weaker and farther apart, so they are less able to fight each other,” Braumoeller said.

“Once you control for their ability to fight each other, the proclivity to go to war hasn’t really changed over the last two centuries.”

Braumoeller presented his research Aug. 29 in Chicago at the annual meeting of the American Political Science Association.

Several researchers have claimed in recent years that war is in decline, most notably Steven Pinker in his 2011 book The Better Angels of Our Nature: Why Violence Has Declined.

As evidence, Pinker points to a decline in war deaths per capita. But Braumoeller said he believes that is a flawed measure.

“That accurately reflects the average citizen’s risk from death in war, but countries’ calculations in war are more complicated than that,” he said.

Moreover, since population grows exponentially, it would be hard for war deaths to keep up with the booming number of people in the world.

Because we cannot predict whether wars will be quick and easy or long and drawn-out (“Remember ‘Mission Accomplished’?” Braumoeller says), a better measure of how warlike we as humans are is to start with how often countries use force — such as missile strikes or armed border skirmishes — against other countries, he said.

“Any one of these uses of force could conceivably start a war, so their frequency is a good indication of how war prone we are at any particular time,” he said.

Braumoeller used the Correlates of War Militarized Interstate Dispute database, which scholars from around the world study to measure uses of force up to and including war.

The data shows that the uses of force held more or less constant through World War I, but then increased steadily thereafter.

This trend is consistent with the growth in the number of countries over the course of the last two centuries.

But just looking at the number of conflicts per pair of countries is misleading, he said, because countries won’t go to war if they aren’t “politically relevant” to each other.

Military power and geography play a big role in relevance; it is unlikely that a small, weak country in South America would start a war with a small, weak country in Africa.

Once Braumoeller took into account both the number of countries and their political relevance to one another, the results showed essentially no change to the trend of the use of force over the last 200 years.
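Braumoeller's normalization step can be illustrated with a toy calculation. The numbers below are hypothetical, invented for illustration only (they are not from the Correlates of War data), and the fixed 10% "relevance" share is likewise an assumption; the point is simply that a raw dispute count can rise while the rate per politically relevant pair of countries stays flat.

```python
# Toy illustration (hypothetical numbers, not Braumoeller's data) of why
# normalization matters: the raw count of militarized disputes can grow
# while the rate per politically relevant pair of countries stays flat.

def relevant_dyads(n_countries, relevance=0.1):
    """Assume only a fixed share of all country pairs is 'politically
    relevant' -- close enough, or strong enough, to fight at all."""
    total_pairs = n_countries * (n_countries - 1) // 2
    return relevance * total_pairs

# Hypothetical eras: 50 countries with 25 uses of force, then 190 with 360.
for n, disputes in [(50, 25), (190, 360)]:
    rate = disputes / relevant_dyads(n)
    print(f"{n} countries: {disputes} disputes, {rate:.2f} per relevant dyad")
    # both eras come out at roughly 0.20 disputes per relevant dyad
```

Here the absolute number of disputes rises more than tenfold, yet the per-dyad propensity is unchanged — the shape of the argument in the paragraph above.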

While researchers such as Pinker have suggested that countries are actually less inclined to fight than they once were, Braumoeller said these results suggest a different reason for the recent decline in war.

“With countries being smaller, weaker and more distant from each other, they certainly have less ability to fight. But we as humans shouldn’t get credit for being more peaceful just because we’re not as able to fight as we once were,” he said.

“There is no indication that we actually have less proclivity to wage war.”

How Vegetation Competes for Rainfall in Dry Regions (Science Daily)

Aug. 30, 2013 — The greater the plant density in a given area, the greater the amount of rainwater that seeps into the ground. This is due to a higher presence of dense roots and organic matter in the soil. Since water is a limited resource in many dry ecosystems, such as semi-arid environments and semi-deserts, it benefits vegetation to adapt by forming tighter networks with little space between plants.

Vertical aerial view of a tiger bush plateau in Niger. Vegetation is dominated by Combretum micranthum and Guiera senegalensis. Image size : 5 x 5 km on the ground. Satellite image from the Declassified corona KH-4A national intelligence reconnaissance system, 1965-12-31. (Credit: Courtesy of the U.S. Geological Survey)

Hence, vegetation in semi-arid environments (or regions with low rainfall) self-organizes into patterns or “bands.” The pattern formation occurs where stripes of vegetation run parallel to the contours of a hill, and are interlaid with stripes of bare ground. Banded vegetation is common where there is low rainfall. In a paper published last month in the SIAM Journal on Applied Mathematics, author Jonathan A. Sherratt uses a mathematical model to determine the levels of precipitation within which such pattern formation occurs.

“Vegetation patterns are a common feature in semi-arid environments, occurring in Africa, Australia and North America,” explains Sherratt. “Field studies of these ecosystems are extremely difficult because of their remoteness and physical harshness; moreover there are no laboratory replicates. Therefore mathematical modeling has the potential to be an extremely valuable tool, enabling prediction of how pattern vegetation will respond to changes in external conditions.”

Several mathematical models have attempted to address banded vegetation in semi-arid environments, of which the oldest and most established is a system of partial differential equations, called the Klausmeier model.

The Klausmeier model is based on a water redistribution hypothesis, which assumes that rain falling on bare ground infiltrates only slightly; most of it runs downhill in the direction of the next vegetation band. It is here that rain water seeps into the soil and promotes growth of new foliage. This implies that moisture levels are higher on the uphill edge of the bands. Hence, as plants compete for water, bands move uphill with each generation. This uphill migration of bands occurs as new vegetation grows upslope of the bands and old vegetation dies on the downslope edge.

In this paper, the author uses the Klausmeier model, which is a system of reaction-diffusion-advection equations, to determine the critical rainfall level needed for pattern formation based on a variety of ecological parameters, such as rainfall, evaporation, plant uptake, downhill flow, and plant loss. He also investigates the uphill migration speeds of the bands. “My research focuses on the way in which patterns change as annual rainfall varies. In particular, I predict an abrupt shift in pattern formation as rainfall is decreased, which dramatically affects ecosystems,” says Sherratt. “The mathematical analysis enables me to derive a formula for the minimum level of annual rainfall for which banded vegetation is viable; below this, there is a transition to complete desert.”
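The reaction-diffusion-advection system described above can be sketched numerically. Below is a minimal, illustrative 1-D simulation of the nondimensionalized Klausmeier model (water w advected downhill, plant biomass u diffusing), not Sherratt's analysis itself: the function name, grid settings, and numerical scheme are my own choices, and the parameter values are the commonly used illustrative ones rather than figures taken from the paper.

```python
import numpy as np

# Minimal 1-D sketch of the nondimensionalized Klausmeier model:
#   dw/dt = a - w - w*u**2 + v * dw/dx   (water: rainfall, loss, uptake, downhill flow)
#   du/dt = w*u**2 - b*u + d2u/dx2       (plants: growth, mortality, dispersal)
# Explicit finite differences on a periodic hillslope; illustrative parameters.

def simulate_klausmeier(a=2.0, b=0.45, v=182.5, n=200, length=100.0,
                        dt=1e-3, steps=20000, seed=0):
    rng = np.random.default_rng(seed)
    dx = length / n
    w = np.full(n, a)                        # water near its bare-ground balance
    u = 1.0 + 0.1 * rng.standard_normal(n)   # small noise seeds any banding instability
    for _ in range(steps):
        dwdx = (np.roll(w, -1) - w) / dx                           # upwind advection
        lap_u = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2   # plant diffusion
        w_new = w + dt * (a - w - w * u**2 + v * dwdx)
        u_new = u + dt * (w * u**2 - b * u + lap_u)
        w, u = w_new, u_new
    return w, u

w, u = simulate_klausmeier()
```

Lowering the rainfall parameter `a` toward Sherratt's critical threshold is where, per the paper, banded solutions give way to desert; a sketch like this only shows the mechanics of the model, not the paper's quantitative predictions.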

The model has value in making resource decisions and addressing environmental concerns. “Since many semi-arid regions with banded vegetation are used for grazing and/or timber, this prediction has significant implications for land management,” Sherratt says. “Another issue for which mathematical modeling can be of value is the resilience of patterned vegetation to environmental change. This type of conclusion raises the possibility of using mathematical models as an early warning system that catastrophic changes in the ecosystem are imminent, enabling appropriate action (such as reduced grazing).”

The simplicity of the model allows the author to make detailed predictions, but more realistic models are required to further this work. “All mathematical models are a compromise between the complexity needed to adequately reflect real-world phenomena, and the simplicity that enables the application of mathematical methods. My paper concerns a relatively simple model for vegetation patterning, and I have been able to exploit this simplicity to obtain detailed mathematical predictions,” explains Sherratt. “A number of other researchers have proposed more realistic (and more complex) models, and corresponding study of these models is an important area for future work. The mathematical challenges are considerable, but the rewards would be great, with the potential to predict things such as critical levels of annual rainfall with a high degree of quantitative accuracy.”

Journal Reference:

  1. Jonathan A. Sherratt. Pattern Solutions of the Klausmeier Model for Banded Vegetation in Semiarid Environments V: The Transition from Patterns to Desert. SIAM Journal on Applied Mathematics, 2013; 73 (4): 1347. DOI: 10.1137/120899510

Poor Concentration: Poverty Reduces Brainpower Needed for Navigating Other Areas of Life (Science Daily)

Aug. 29, 2013 — Poverty and all its related concerns require so much mental energy that the poor have less remaining brainpower to devote to other areas of life, according to research based at Princeton University. As a result, people of limited means are more likely to make mistakes and bad decisions that may be amplified by — and perpetuate — their financial woes. 

Research based at Princeton University found that poverty and all its related concerns require so much mental energy that the poor have less remaining brainpower to devote to other areas of life. Experiments showed that the impact of financial concerns on the cognitive function of low-income individuals was similar to a 13-point dip in IQ, or the loss of an entire night’s sleep. To gauge the influence of poverty in natural contexts, the researchers tested 464 sugarcane farmers in India who rely on the annual harvest for at least 60 percent of their income. Each farmer performed better on common fluid-intelligence and cognition tests post-harvest compared to pre-harvest. (Credit: Image courtesy of Princeton University)

Published in the journal Science, the study presents a unique perspective regarding the causes of persistent poverty. The researchers suggest that being poor may keep a person from concentrating on the very avenues that would lead them out of poverty. A person’s cognitive function is diminished by the constant and all-consuming effort of coping with the immediate effects of having little money, such as scrounging to pay bills and cut costs. Thus, a person is left with fewer “mental resources” to focus on complicated, indirectly related matters such as education, job training and even managing their time.

In a series of experiments, the researchers found that pressing financial concerns had an immediate impact on the ability of low-income individuals to perform on common cognitive and logic tests. On average, a person preoccupied with money problems exhibited a drop in cognitive function similar to a 13-point dip in IQ, or the loss of an entire night’s sleep.

But when their concerns were benign, low-income individuals performed competently, at a similar level to people who were well off, said corresponding author Jiaying Zhao, who conducted the study as a doctoral student in the lab of co-author Eldar Shafir, Princeton’s William Stewart Tod Professor of Psychology and Public Affairs. Zhao and Shafir worked with Anandi Mani, an associate professor of economics at the University of Warwick in Britain, and Sendhil Mullainathan, a Harvard University economics professor.

“These pressures create a salient concern in the mind and draw mental resources to the problem itself. That means we are unable to focus on other things in life that need our attention,” said Zhao, who is now an assistant professor of psychology at the University of British Columbia.

“Previous views of poverty have blamed poverty on personal failings, or an environment that is not conducive to success,” she said. “We’re arguing that the lack of financial resources itself can lead to impaired cognitive function. The very condition of not having enough can actually be a cause of poverty.”

The mental tax that poverty can put on the brain is distinct from stress, Shafir explained. Stress is a person’s response to various outside pressures that — according to studies of arousal and performance — can actually enhance a person’s functioning, he said. In the Science study, Shafir and his colleagues instead describe an immediate rather than chronic preoccupation with limited resources that can be a detriment to unrelated yet still important tasks.

“Stress itself doesn’t predict that people can’t perform well — they may do better up to a point,” Shafir said. “A person in poverty might be at the high part of the performance curve when it comes to a specific task and, in fact, we show that they do well on the problem at hand. But they don’t have leftover bandwidth to devote to other tasks. The poor are often highly effective at focusing on and dealing with pressing problems. It’s the other tasks where they perform poorly.”

The fallout of neglecting other areas of life may loom larger for a person just scraping by, Shafir said. Late fees tacked on to a forgotten rent payment, a job lost because of poor time-management — these make a tight money situation worse. And as people get poorer, they tend to make difficult and often costly decisions that further perpetuate their hardship, Shafir said. He and Mullainathan were co-authors on a 2012 Science paper that reported a higher likelihood of poor people to engage in behaviors that reinforce the conditions of poverty, such as excessive borrowing.

“They can make the same mistakes, but the outcomes of errors are more dear,” Shafir said. “So, if you live in poverty, you’re more error prone and errors cost you more dearly — it’s hard to find a way out.”

The first set of experiments took place in a New Jersey mall between 2010 and 2011 with roughly 400 subjects chosen at random. Their median annual income was around $70,000 and the lowest income was around $20,000. The researchers created scenarios wherein subjects had to ponder how they would solve financial problems, for example, whether they would handle a sudden car repair by paying in full, borrowing money or putting the repairs off. Participants were assigned either an “easy” or “hard” scenario in which the cost was low or high — such as $150 or $1,500 for the car repair. While participants pondered these scenarios, they performed common fluid-intelligence and cognition tests.

Subjects were divided into a “poor” group and a “rich” group based on their income. The study showed that when the scenarios were easy — the financial problems not too severe — the poor and rich performed equally well on the cognitive tests. But when they thought about the hard scenarios, people at the lower end of the income scale performed significantly worse on both cognitive tests, while the rich participants were unfazed.

To better gauge the influence of poverty in natural contexts, between 2010 and 2011 the researchers also tested 464 sugarcane farmers in India who rely on the annual harvest for at least 60 percent of their income. Because sugarcane harvests occur once a year, these are farmers who find themselves rich after harvest and poor before it. Each farmer was given the same tests before and after the harvest, and performed better on both tests post-harvest compared to pre-harvest.
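Because each farmer serves as his own control, the natural analysis of this design is a paired comparison of post-harvest minus pre-harvest scores. The sketch below uses simulated scores, not the study's data; the 5-point average gain and the score scale are assumptions chosen purely to illustrate the shape of the calculation.

```python
# Hedged sketch of a within-subject (paired) comparison like the sugarcane
# farmer design: each farmer is tested before and after harvest, and we
# look at the per-farmer score difference. All numbers here are simulated.
import random
import statistics

random.seed(1)
n_farmers = 464
pre = [random.gauss(100, 15) for _ in range(n_farmers)]   # pre-harvest scores
post = [s + random.gauss(5, 10) for s in pre]             # same farmer, post-harvest

diffs = [b - a for a, b in zip(pre, post)]
mean_gain = statistics.mean(diffs)
se = statistics.stdev(diffs) / n_farmers ** 0.5
t_stat = mean_gain / se
print(f"mean post-minus-pre gain: {mean_gain:.2f} points, t = {t_stat:.1f}")
```

Pairing removes stable between-farmer differences (schooling, age, baseline ability) from the comparison, which is why the design supports the before/after claim the paragraph above describes.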

The cognitive effect of poverty the researchers found relates to the more general influence of “scarcity” on cognition, which is the larger focus of Shafir’s research group. Scarcity in this case relates to any deficit — be it in money, time, social ties or even calories — that people experience in trying to meet their needs. Scarcity consumes “mental bandwidth” that would otherwise go to other concerns in life, Zhao said.

“These findings fit in with our story of how scarcity captures attention. It consumes your mental bandwidth,” Zhao said. “Just asking a poor person to think about hypothetical financial problems reduces mental bandwidth. This is an acute, immediate impact, and has implications for scarcity of resources of any kind.”

“We documented similar effects among people who are not otherwise poor, but on whom we imposed scarce resources,” Shafir added. “It’s not about being a poor person — it’s about living in poverty.”

Many types of scarcity are temporary and often discretionary, said Shafir, who is co-author with Mullainathan of the book, “Scarcity: Why Having Too Little Means So Much,” to be published in September. For instance, a person pressed for time can reschedule appointments, cancel something or even decide to take on less.

“When you’re poor you can’t say, ‘I’ve had enough, I’m not going to be poor anymore.’ Or, ‘Forget it, I just won’t give my kids dinner, or pay rent this month.’ Poverty imposes a much stronger load that’s not optional and in very many cases is long lasting,” Shafir said. “It’s not a choice you’re making — you’re just reduced to few options. This is not something you see with many other types of scarcity.”

The researchers suggest that services for the poor should accommodate the dominance that poverty has on a person’s time and thinking. Such steps would include simpler aid forms and more guidance in receiving assistance, or training and educational programs structured to be more forgiving of unexpected absences, so that a person who has stumbled can more easily try again.

“You want to design a context that is more scarcity proof,” said Shafir, noting that better-off people have access to regular support in their daily lives, be it a computer reminder, a personal assistant, a housecleaner or a babysitter.

“There’s very little you can do with time to get more money, but a lot you can do with money to get more time,” Shafir said. “The poor, who our research suggests are bound to make more mistakes and pay more dearly for errors, inhabit contexts often not designed to help.”

Journal Reference:

  1. A. Mani, S. Mullainathan, E. Shafir, J. Zhao. Poverty Impedes Cognitive Function. Science, 2013; 341 (6149): 976. DOI: 10.1126/science.1238041

Besieged paradise (O Globo)

The plight of the Awá Indians and the resistance of a people trying to stop the criminal actions of loggers in the Gurupi Biological Reserve, where the indigenous territory has already lost 30% of its original landscape.

REPORT: MÍRIAM LEITÃO – PHOTOS: SEBASTIÃO SALGADO


Surviving with courage

Considered one of the last hunter-gatherer peoples on the planet, the little more than 400 Awá who inhabit what remains of the Amazon Forest in Maranhão are living through the most decisive moment for their survival: stopping land-grabbers, squatters and loggers from destroying their most precious asset. It is from the trees and dense forest of the Gurupi Biological Reserve that they draw their food, and their certainty that tomorrow they will be able to ensure the continuity of their people. They want nothing more than a guarantee from the federal government that their territory will not be devastated by the greed of the white man, who is advancing in great strides in search of valuable hardwood.

Although their land has already been demarcated, ratified and registered by the federal government at 116,582 hectares, they face the real threat of watching the destruction of the forest on which they are so dependent and from which they draw their children's sustenance. Even though the courts have already ordered the removal of these 'intruders,' or non-Indians, as Funai defines them, the Awá fear for their own fate, affirm their courage, and do not waver when they see their resistance put in check. "We are not afraid. We will resist," they say in impassioned speeches.

Reporter Míriam Leitão, at the invitation of the renowned photographer Sebastião Salgado, traveled to the Juriti village and saw firsthand how the Awá live this dramatic state of suspense. In this special feature, which complements the series of reports published in O GLOBO's Sunday edition, readers can learn more about the daily life of the so-called 'invisible Indians,' how they live, and how they revere their sacred culture.





The Awá-Guajá have been turned into stick insects (Yahoo! Notícias)

By  – Tue, Aug 20, 2013

The first photographs of Indians appeared at the very dawn of the invention and popularization of daguerreotypes and cameras. Across the American West, intrepid adventurers risked their lives and their enormous contraptions to get a shot of a group of warriors on horseback, or a portrait of a Sioux, Apache or Comanche chief decked out in buffalo-hide coats and exuberant eagle-feather headdresses. They posed stiff and stone-faced, gazing at the infinite horizon, as if attending some ceremony with foreign authorities, perhaps the American president himself, asserting their human dignity in order to preserve or recover their territories and occupy a worthy place in the new world being created around them.

But fate was all too cruel to them.

From the solemn photographs, at the end of the years of resistance (1830-1880) in which the West was definitively incorporated into the United States of America, came the desacralization of the Indians, when even a greatly respected leader like Sitting Bull, the Sioux chief who destroyed George Custer's 7th Cavalry at the famous Battle of Little Bighorn, submitted to being one of the stars of the famous circus of the braggart Buffalo Bill, riding horses, letting out war cries and brandishing a Winchester rifle loaded with blanks. The Wild West Circus made history performing in the towns and hamlets springing up across the vast American Midwest. Mutatis mutandis, don't imagine that Brazil is any different!

After the era of foreign travelers making drawings and watercolors, the first photographs of Brazilian Indians were taken in cities such as Manaus and Cuiabá. Marc Ferrez, famous for his photographs of Rio de Janeiro, managed to bring a group of 11 Bororo Indians into a studio in Cuiabá and photographed them masterfully, showing them as dignified human beings in all their virtuous nudity, as early as 1880.

Bororo Indians, Gilberto Ferrez collection


In the fields, forests and savannas, live in their own environments, photographs of Brazilian Indians would later emerge through the lenses of travelers, scientists and, at the beginning of the 20th century, the Rondon Commission, which crossed the whole of western Mato Grosso, Rondônia and various parts of the Amazon. The Indians appear sometimes completely naked, with some cloth or a loincloth improvised on the spot, sometimes dressed in collarless long-sleeved shirts and simple trousers, like the poor Brazilians of the period, barefoot, the odd one in military uniform, the women in skirts with their breasts exposed. Except in the missions, where the skirts came down to the ankles.

The black, gray and white coloring of these photographs is of the kind we now call sepia, which makes the images distinguishable only by the texture and shape of objects, as if the world were a half-light. Looking at them, the viewer needed an intellectual effort to see and give meaning to the various images. Through that effort the images took on a meaning far beyond the everyday real. In some way they became sacralized, like an antique or a precious object. That is why in those days it became customary for friends to give each other portraits, which were solemnly displayed in living rooms and offices. That is why photographs were treated with reverence and affection, and were kissed as if they represented the living people themselves.

The sharp coloring of photographs since the 1960s has changed the way we see them, and they have gradually become vulgarized, both through the banality of their sheer abundance and, above all, through the realism they evoke. The art of photography has consequently come to require greater subtlety of lighting to achieve any sense of sacredness in its subject.

And here we come to the purpose of this article: the photographs taken by the professional photographer Sebastião Salgado of the Awá-Guajá Indians of Maranhão, recently published by the newspaper O Globo in a report by Miriam Leitão.

In these photographs the Guajá, the last society to live almost exclusively from hunting, fishing and gathering forest animals, fruits and tubers, are shot in sepia tones, in low light, against a "natural" background of trees, roots and ground. They appear naked, the men with their foreskins tied with tucum-palm fiber, bracelets and halos of feathers, the boys and girls wearing nothing, and the women in their woven tucum-fiber skirts and the slings in which they carry their babies.

Some photographs, perhaps the ones most deeply in tune with the purposes of the Minas-born, Europe-based photographer, show groups of men and boys in full adornment, all standing, stern-faced yet silent and motionless, bows in hand, in a clear allusion that they are part of the forest that flanks them as backdrop and shared habitat.

In Salgado's photographs there is no ethnographic information, except for the one in which a hunter, whom I recognize by the name of Mutumhû, carries a dead howler monkey slung over his back, with a look of unguarded worry. We do not learn how the Guajá live, how they feed themselves, how they love and care for their children, how they have fun and how they suffer. There is no time here. Neither eternity nor instantaneity. Dead time.

The journalist Miriam Leitão, moved by what she saw, produced several texts in which she seeks to show that the Guajá's survival is in danger, inadvertently echoing the campaigns of Survival International, an English NGO that proclaims the Guajá the most endangered people on Earth. In total the Guajá number about 360 people, though they were fewer than 200 in the 1980s. The indigenous territory visited by Salgado and Leitão, called Awá-Guajá, is indeed partially invaded by loggers and squatters, but the articles say nothing about why this is happening, or about whether Funai's presence is effective in containing the danger and assisting the Indians. Why is everything so bad in Brazilian indigenist policy today?

The Guajá, owing to their culturally characteristic mobility, dispersed more than 100 years ago across a vast area of western Maranhão, and today they live in four indigenous territories with little contact among themselves. The Awá-Guajá indigenous territory, with 116,000 hectares, connects the Caru (172,000 ha) and Alto Turiaçu (530,000 ha) territories, which together total 718,000 hectares, shared with the Guajajara and Kaapor peoples. In several stretches of these lands the last loggers of Maranhão run riot, cutting trails through the forest for their tractors and trucks loaded with hardwood logs, under the averted eyes of a few non-Guajá Indians co-opted with crumbs, and with the indigenist agency, lately in complete inactivity and decline, rarely present.

Those Guajá who live in the Awá-Guajá Indigenous Territory are, ironically, the best protected, thanks to dedicated employees of the indigenist agency, who sometimes go more than 60 days without leaving the indigenous post near which the visited Guajá group settled more than 20 years ago. In fact, the post no longer officially exists; it was abolished by presidential decree, and it is only out of stubbornness that its resolute employees remain. They will probably have to withdraw for good before this month is out. If at least this information, and a little of the history of the unequal struggle these employees wage to uphold the Indians' dignity in the face of the federal government's indifference, had been shared with readers, something truer and more hopeful would have come out of this expedition and the photographs it produced.

Illustrating with photographs that are both artistic and ethnographically relevant, to inform viewers and win their sympathy for the indigenous cause, has been the noblest purpose of photographers and journalists. In this case, the journalistic coverage can certainly pressure the government to take the measures needed to comply with the court orders, issued some time ago by the Federal Court of Maranhão, to remove once and for all the invaders who insist on remaining on this land, and to put a definitive stop to the loggers' destructive activities. None of this is easy under the official anti-indigenism we are living through. That is the meaning one hoped for from this report published in O Globo by two competent news professionals.

Beyond that purpose, however, Salgado's photographs have an aim regarded as transcendent. Salgado is engaged in a project sponsored by private funders and endorsed by the UN with the visionary and, one might say, poetic (in the sense of creative) intent of capturing, in today's world, so diverse in nature and cultures yet so dilapidated and tending toward homogenization, that which would represent the very genesis of the Earth, of life and of man. A kind of living archaeology.

Thus Salgado has been photographing untouched glaciers, blazing volcanoes, unreachable mountains, and remote, "immutable" peoples in Africa, Asia and the Americas. Seven years ago he spent a month living in two villages of the Upper Xingu Indians, capturing their life as authentically as possible. During the Kwarup ceremony, in which the Indians mark the end of a period of mourning for the death of a respected relative, Salgado made an absolute point of photographing the occasion without any object foreign to Xingu culture in the frame, be it a straw cigarette or a strip of cloth, much less a non-indigenous person, even though the village had all of that and more. The photographs are ahistorical, since all such signs were purged by the photographer's pristine lenses. Yet the Xingu Indians do not live or think that way: they know they are in the middle of history's whirlwind, being and becoming, seeking their place in today's world, not floating in a dehistoricized ether.

The Guajá know far less of our world, so recent and so painful has their approach to us been. We know far less about them. Their traditional culture is alive, despite the exogenous objects. So portraying them as extras in a natural scene, showing them as almost chthonic beings, cannot have been difficult for Sebastião Salgado. But instead of the result becoming an image of self-knowledge for the Guajá, helping them advance their understanding of the world that is crushing them, and a way for us to know them better so as to love them, help them survive, and find them a safe place in this convoluted world, these photographs become a disservice to us all, to the Guajá above all.

In these photos the Guajá have become creatures of nature, animals or plants, foliage, perhaps even stick insects camouflaged in the forest sepia. The aesthetics of the ecologization of the world has turned into an aesthetics of the dehumanization of man, through the lenses of a Brazilian who has lost his sense of Brazilianness and roams the world bound to the chains of the spectacle of make-believe.

Can weather swing an election? (Discover)

By Seriously Science | May 22, 2013 10:30 am


It’s pretty obvious that weather can affect overall voter turnout; many people just don’t want to go out in the rain, even if it’s to exercise their civic duty. But does weather affect some political parties more than others? Are right-wing voters more likely to skip the polls on a rainy day? Do Democrats forget to vote when the surf’s up? Well, not many people go surfing in the Netherlands, but they do have elections and weather, and this study describes the relationship between the two.

Weather conditions and political party vote share in Dutch national parliament elections, 1971-2010.

“Inclement weather on election day is widely seen to benefit certain political parties at the expense of others. Empirical evidence for this weather-vote share hypothesis is sparse however. We examine the effects of rainfall and temperature on share of the votes of eight political parties that participated in 13 national parliament elections, held in the Netherlands from 1971 to 2010. This paper merges the election results for all Dutch municipalities with election-day weather observations drawn from all official weather stations well distributed over the country. We find that the weather parameters affect the election results in a statistically and politically significant way. Whereas the Christian Democratic party benefits from substantial rain (10 mm) on voting day by gaining one extra seat in the 150-seat Dutch national parliament, the left-wing Social Democratic (Labor) and the Socialist parties are found to suffer from cold and wet conditions. Cold (5°C) and rainy (10 mm) election day weather causes the latter parties to lose one or two parliamentary seats.”
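The study's basic method, merging municipal vote shares with election-day weather observations and estimating the weather effect, can be sketched with an ordinary least squares regression. The data below are simulated; the coefficients, noise level, and sample size are invented for illustration and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400  # simulated municipality-election observations

rain = rng.uniform(0, 12, n)   # election-day rainfall, mm
temp = rng.uniform(0, 20, n)   # election-day temperature, deg C
# A "Labor-like" party hurt by rain and cold, plus idiosyncratic noise:
share = 25.0 - 0.15 * rain + 0.10 * temp + rng.normal(0.0, 1.0, n)

# OLS fit: share ~ intercept + rain + temp
X = np.column_stack([np.ones(n), rain, temp])
beta, *_ = np.linalg.lstsq(X, share, rcond=None)
intercept, b_rain, b_temp = beta

# Swing on a cold (5 C), wet (10 mm) day versus a dry 15 C day,
# in percentage points of vote share:
delta = b_rain * 10 + b_temp * (5 - 15)
print(f"rain: {b_rain:.3f} pp/mm, temp: {b_temp:.3f} pp/degC, swing: {delta:.2f} pp")
```

With enough municipalities, the regression recovers a negative rain coefficient and a positive temperature coefficient for this simulated party, which is the shape of the effect the paper reports for the left-wing parties.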


Master of Disaster (Discover)

Earthquakes and hurricanes will always wreak havoc, but risk management expert Robert Bea says the greatest tragedies result from hubris and greed.

By Linda Marsa | Thursday, May 23, 2013


Robert Bea has an unusual specialty: He studies disasters. As one of the world’s leading experts in catastrophic risk management, the former Shell Oil Co. executive sifts through the wreckage to unravel the chain of events that triggers accidents. The blunt-spoken civil engineer has spent more than a half-century investigating high-profile engineering failures, from the space shuttle Columbia’s horrific end to the explosion of the Deepwater Horizon oil-drilling rig in the Gulf.

A professor emeritus of civil engineering at the University of California, Berkeley, Bea’s disaster autopsy methods — such as looking at the organizational breakdowns that lead to calamities — have been widely adopted. Although policymakers and corporate honchos seek his counsel, sometimes they don’t like what he has to say — witness the flak he took from BP during the Deepwater Horizon probe.

Now in his mid-70s, Bea’s voice is raspier, but his critical faculties are undimmed. On a crisp fall day, he talked with DISCOVER in his comfortable one-story house in Moraga, a leafy suburb east of Berkeley, about what causes catastrophes.

You have said that engineering failures aren’t the chief culprits behind disasters, pointing instead to human and organizational failures — inadequate safety protocols, corporate hierarchies, conflicting egos or just plain laziness. Was there an “aha moment” when this became apparent?

It was during the investigation into the Piper Alpha disaster, when an explosion destroyed an Occidental Petroleum oil-drilling platform in the North Sea in 1988, killing 167 men. The external investigation team Occidental had hired to determine what caused Piper Alpha found a corporate culture that had gone bad, that had lost its way.

I was part of that team all the way through the Lord Cullen Commission hearings in London, and I had to listen to one of my friends explain to the Cullen Commission why he and his colleagues had turned off the smoke alarms on the platform because the operating crew was doing a routine maintenance procedure late in the evening. Unfortunately, for over a month, certain alarms had been disabled to prevent unnecessary shutdowns on the rig — in some cases as a response to practical jokes. But turning off the alarms was one of the reasons they got caught by surprise.

Ironically, two years before, I was brought in to advise Occidental on risk management for Piper Alpha because they were having gas releases, pipes were leaking. Of course, you didn’t have to be very smart to say, “Yeah, we’ve got a problem — it’s called rusty pipes. And we’ve got problems with people not doing what they should be doing, and people who don’t understand what’s happening.”

One evening, during the first year of the investigation, I saw spread out on the reception table of the Occidental offices a copy of the London Times newspaper with a great, big, bold headline that said, “Occidental puts profit before safety.”

It had a picture of one of the bandaged, beat-up, horribly scarred survivors from the disaster who was telling this to the newspaper. What this survivor was observing is true. If you don’t have profitability, you don’t have the resources to invest in achieving adequate protection. What the tension is, is having the discipline and the foresight to make those investments before you’re in trouble.

When I came back to Berkeley after the investigation was completed, I realized that for the past 50-some-odd years of my career, I’d been working on 10 percent of the problems. I’d been working on normal engineering things, and 90 percent of the problems are humans and/or organizations.

We often have ample warnings before catastrophes hit, but we tend to ignore them until it’s too late. Why?

The problem is attention span, particularly in this country because we are a pretty young country. Our knowledge of history is very limited. We are extremely blessed. Lots of good things attract our attention. It’s a noisy environment, really noisy. It’s unusual to find people who are comfortable sitting in a room by themselves thinking.

You could say the eruption of Mount St. Helens was certainly painful, but it actually affected relatively few people and then disappeared into that strong noise environment. At that point people say, “Well, it’s never happened to me. I can’t even remember my parents talking about it, and I’ve got these new things to play with, and they require attention,” like Facebook and Twitter. And suddenly, we have flitted from something that is difficult and painful to think about back to something that is enjoyable.


Robert Bea helped investigate the 1988 Piper Alpha disaster, where an exploding oil platform killed 167 in the North Sea. Press Association/AP

You seem to be suggesting that people have trouble dealing with issues over the long term. Are there other examples?

Well, global climate change is a perfect one, or rising sea levels. It’s happening slowly. People love living by the beach, so they build a beautiful home on a concrete slab, on top of the sand a few feet above sea level, and [ignore the fact] that the sea level is [rising]. So thinking about these slowly evolving long-term things, it is painful. It says, “Well, I might have to move my home. I really enjoy the beach,” and we don’t like to give those things up.

Is this inability to think long term also true of organizations — corporations or government agencies?

Yes. The equation for disaster is A + B = C. A is natural hazards, things like hurricanes, gases and liquids under pressure that are extremely volatile. They’re volcanoes. They’re tsunamis. They’re natural, and there’s nothing unusual about them. B is organizational hazards: people and their hubris, their arrogance, their greed. The real killer is our indolence.

So human error is the kindling that escalates a natural hazard — a hurricane, a tsunami, chemicals under pressure — into C, a catastrophic disaster. Can you give me some examples?

Hurricane Ike. Galveston, Texas, got completely wiped out in 1900. Thousands of people got killed. So the U.S. Army Corps of Engineers built a seawall on Galveston Island, and that sucker has gone through every major hurricane since 1900.

But people think that if a storm hasn’t happened since they lived there, somehow it can’t happen to them. This is where B comes in — the hubris and shortsightedness. Because a hurricane hadn’t flattened the city in decades, civic leaders decided to let people build at sea level again. And when Hurricane Ike came through in 2008, it was just like Berlin at the end of the Second World War. Everything was gone.

Before Superstorm Sandy, I wrote that the subways were going to flood, but no one did anything. Mayor Bloomberg even hired some of my engineer friends from the Netherlands to come to New York City and advise him about building gates to cut off incoming hurricane surges.

But here we’re back to B — hubris and shortsightedness. People think because they’ve never seen a storm like what happened in New Jersey or they’ve never seen the tunnels flooded in New York City that it can’t happen, or that they need to think about building a levee.


Disaster specialist Robert Bea studied the devastating aftermath of Hurricane Katrina, which left New Orleans flooded in 2005. NOAA

When I lived in New Orleans, we lost everything in Hurricane Betsy [in 1965]: our house, wedding photographs, marriage license, birth certificates. Yet 40 years later, after Katrina, I go back to the same place. There’s a new home built on the foundation, and the owners are dragging wet, oily mattresses out the front door.

Luckily I had no one with me that morning, but I broke down and cried. It wasn’t tears of sadness. It was tears of frustration at such a miserable, despicable mess. While we can’t prevent disaster, we can do things that are more sensible to mitigate risks, like maybe not building homes in floodplains.

But the cities are already there. Are you going to move entire cities?

In some cases, yes. We did it in the Mississippi River Valley after the 1993 floods. We actually moved entire towns to higher ground, like Valmeyer, Ill., and Rhineland, Mo., because we suddenly recognized they’d rebuilt them five times in the same damn place. Doing it six times doesn’t quite make sense. But there is not a “one size fits all” answer.

In other cases, there are intermediate solutions that can work, such as occupying only what you can defend properly and in a sustainable manner. An example is the “new New Orleans,” where parts of the city outside of the defended perimeter of the levee system can be expected to flood severely and frequently. Individuals there are building structures on higher ground and making them stronger, and preparing to take care of themselves in future storms.

But even after Superstorm Sandy’s devastation, you can’t just completely rebuild Lower Manhattan.

No, you can’t. But you can follow the example of the massive Thames Barrier that has been built in London [10 steel gates that prevent the city from being flooded by tidal surges].

But that cost over 500 million pounds — about $850 million — when it was completed in 1982. Doing the same in Manhattan would cost up to $17 billion and an additional $10 billion to $12 billion to shore up areas next to the barriers, an astonishing amount of money.

Well, do you want to fix it now or fix it later, when it will cost 100 times as much? Damages from Superstorm Sandy in New York and New Jersey are estimated at $60 billion to $70 billion. The key question I always ask is, “Can we work this problem out in a responsible way, or do we wait until it fails — in other words, will we fix it now versus pick up the pieces later?”

We looked into this “pay me now or pay me later” question, and in many cases it is more than 100 times the cost to fix afterward. We documented the economics of several major accidents where the factor is bigger than 1,000, like Katrina.

People regularly ignore risks, but isn’t it the extreme scenario — the thing that has the 1 percent chance of happening, the so-called long tail at each end of the bell curve — that causes all the trouble? What about events beyond the realm of normal expectations, like Superstorm Sandy? 

Sandy is better classified as a “predictable surprise.” There were some groups in the greater New York area that had a clear understanding of the potential for significant flooding due to an intense storm. But there were other groups who had no idea and made no significant efforts to learn how vulnerable the city was to storm surges from the Atlantic.


A hurricane flattened Galveston, Texas, in 1900. Hurricane Ike in 2008 took out the city again. Library of Congress

What do you do when it becomes more costly to prepare for every disaster than just to take the risk? How safe is safe enough?

In many cases, you can’t prepare because no one’s willing to spend money to do this. We’ve rebuilt the levees around New Orleans, for example, but they’re right back on the same slabs. And it’s going to cost about 10 percent of the construction costs per year to maintain them.

Now where are you going to get $1.5 billion a year? So they can’t maintain it, and the next time it is challenged — and there is no doubt in my mind it will be challenged severely — it’s going to fail again, and the consequences are going to be worse because we’ve allowed more population and more infrastructure.

In aviation, which has unequaled safety records, they do predict and plan for the worst case because they can’t afford accidents. But Sully Sullenberger’s “Miracle on the Hudson,” when the pilot landed the plane on the Hudson River after flying into a flock of geese, does seem like a miracle.

It was a miracle, but it was not an accident. It was, in fact, rehearsed. That’s how the FAA was able to clear air traffic control, how he and his co-pilot, together with the flight crew, were able to get almost everything right, how the French Airbus had those backflow valves already designed in the fuselage.

We don’t normally land airplanes in water. They’re supposed to be on land, but they designed it to land in water as well, and the crew had rehearsed and planned what they would do in the event of a water landing. They are thinking about the impossible.

What warnings are going unheeded now? The antiquated levee system protecting California’s Sacramento Delta, which is the source of fresh water for 28 million of the state’s residents, comes to mind. I know you’ve been studying this.

An earthquake, a megaflood. Any of these natural disasters could rupture the delta levees and take a lot of the infrastructure — power lines, communication networks, gas pipelines, hydroelectric power systems — with it. Millions could be without power or fresh water for months. It isn’t going to be pretty, and it will knock out of commission the ninth-largest economy in the world.

And it could happen at any time.

Yeah. Tick, tick, tick.

 


Bea goes back in time to explain how we might have avoided notable catastrophes of the past.

 

Deepwater Horizon, 2010

The Deepwater Horizon oil-drilling rig exploded in the Gulf of Mexico, killing 11 crewmen and igniting a fire that could not be extinguished. Two days later the rig sank, leaving the well gushing in the seabed. It ultimately leaked 210 million gallons of oil in the largest offshore spill in U.S. history.

» What was ignored? BP, the industry and federal regulators understood the potential for an uncontrolled blowout in deepwater drilling, but they failed to heed warnings about the structural weaknesses in the cement casings that protect the well pipes. Plus, the blowout preventer hadn’t been working properly for several days.

» What would have improved the outcome? Delivering the required degree of safety consistently and ensuring that protocols were followed. Understanding that wells pumping 162,000 barrels of oil a day are under much higher pressure, and therefore are much more dangerous, than wells delivering 500 barrels a day. Building stronger protective structures to withstand these intense pressures could have prevented the disaster.

Midwest Floods, 2008

Months of rainstorms led to heavy flooding in seven Midwestern states, including Illinois, Minnesota, Indiana and Missouri, resulting in 24 deaths and more than $6 billion in damage.

» What was ignored? The lessons from floods in 1993. The increasing fragility of the aging levee system, and more people building in low-lying areas susceptible to flooding.

» What would have improved the outcome? Giving the water the room it needed to flood and not building in those areas. Building and maintaining high-quality flood protection systems in the areas that could have been protected.


Columbia Disaster, 2003 

The space shuttle disintegrated upon re-entry into the Earth’s atmosphere due to missing heat shield tiles, killing all seven astronauts on board.

» What was ignored? The mantra “better, faster, cheaper” drove the management decision to “bring the bird back,” even though the U.S. Air Force had photographs showing the leading edge of the Columbia was missing heat shield tiles, which were damaged during the launch. Problems with heat shield tiles during earlier missions hadn’t been adequately addressed, and NASA ignored engineers’ requests to ground the mission until these problems were solved.

» What would have improved the outcome? Fixing heat shield tile problems uncovered during earlier missions. Developing a backup plan for fixing the shuttle in space if it was damaged and ensuring the crew’s safety. They were lucky with previous missions that had missing tiles, but they took a chance, and their luck ran out.


Exxon Valdez Crash, 1989

The Exxon Valdez oil tanker ran aground in Prince William Sound, spilling an estimated 11 million gallons of crude oil along Alaskan shores. Numerous factors complicated cleanup, including the size of the spill, the remote location and a lack of readily available equipment and effective chemical dispersants to dissolve the thick oil.

» What was ignored? The dangers of Bligh Reef, where the ship hit the rocks. Taking shortcuts to save time to get the oil to Southern California refineries. Not traveling in regulated shipping lanes.

» What would have improved the outcome? Better communication between vessel captains and traffic control centers to avoid treacherous conditions. True pollution control to improve visibility, and better preparation for cleanup.

Hello, Hal (New Yorker)

Will we ever get a computer we can really talk to?

BY 

JUNE 23, 2008

The challenge is to marry our two greatest technologies: language and toolmaking.


Not long ago, a caller dialled the toll-free number of an energy company to inquire about his bill. He reached an interactive-voice-response system, or I.V.R.—the automated service you get whenever you dial a utility or an airline or any other big American company. I.V.R.s are the speaking tube that connects corporate America to its clients. Companies profess boundless interest in their customers, but they don’t want to pay an employee to talk to a caller if they can avoid it; the average human-to-human call costs the company at least five dollars. Once an I.V.R. has been paid for, however, a human-to-I.V.R. call costs virtually nothing.

“If you have an emergency, press one,” the utility company’s I.V.R. said. “To use our automated services or to pay by phone, press two.”

The caller punched two, and was instructed to enter his account number, which he did. An alert had been placed on the account because of a missed payment. “Please hold,” the I.V.R. said. “Your call is being transferred to a service representative.” This statement was followed by one of the most commonly heard sentences in the English language: “Your call may be monitored.”

In fact, the call was being monitored, and I listened to it some months later, in the offices of B.B.N. Technologies, a sixty-year-old company, in Cambridge, Massachusetts. Joe Alwan, a vice-president and the general manager of the division that makes B.B.N.’s “caller-experience analytics” software, which is called Avoke, was showing me how the technology can automatically create a log of events in a call, render the speech as text, and make it searchable.

Alwan, a compact man with scrunched-together features who has been at B.B.N. for two years, spoke rapidly but smoothly, with a droll delivery. He projected a graphic of the voice onto a screen at one end of the room. “Anger’s the big one,” he said. Companies can use Avoke to determine when their callers are getting angry, so that they can improve their I.V.R.s.

The agent came on the line, said his name was Eric, and asked the caller to explain his problem. Eric had a slight Indian accent and spoke in a high, clear voice. He probably worked at a call center in Bangalore for a few dollars an hour, although his pay was likely based on how efficiently he could process the calls. “The company doesn’t want to spend more money on the call, because it’s a cost,” Alwan said. The caller’s voice gave the impression that he was white (particularly the way he pronounced the “u” in “duuude”) and youthful, around thirty:

CALLER: Hey, what’s going on is, ah, I got a return-payment notice, right?
AGENT: Mhm.
CALLER: And I checked with my bank, and my bank was saying, well, it didn’t even get to you . . . they didn’t reject it. So then I was just, like, what’s the issue, and then, ah, you guys charge to pay over the phone, so that’s why it’s not done over the phone, so that’s why I do it on the Internet, so—
AGENT: O.K.
CALLER: So I don’t . . . know what’s going on.

The caller sounded relaxed, but if you listened closely you could hear his voice welling with quiet anger.

The agent quickly looked up the man’s record and discovered that he had typed in his account number incorrectly. The caller accepted the agent’s explanation but thought he shouldn’t be liable for the returned-payment charge. He said, “There’s nothing that can be done with that return fee, dude?” The agent explained that another company had levied the charge, but the caller took no notice. “I mean, I would be paying it over the phone, so you guys wanna charge people for paying over the phone, and I’ll be—”

People express anger in two different ways. There’s “cold” anger, in which words may be overarticulated but spoken softly, and “hot” anger, in which voices are louder and pitched higher. At first, the caller’s anger was cold:

AGENT: O.K., sir. I’m gonna go ahead and explain this. . . . O.K., so on the information that you put this last time it was incorrect, so I apologize that you put it incorrectly on the site.
CALLER: O.K., we got past that, bro. So tell me something I don’t know. . . .
AGENT: Let’s see . . . uh . . . um.
CALLER: Dude, I don’t care what company it is. It’s your company using that company, so you guys charge it. So you guys should be waiving that shit-over-the-phone shit, pay by phone.
AGENT: But why don’t you talk to somebody else, sir. One moment.

By now, the caller’s anger was hot. He was put on hold, but B.B.N. was still listening:

CALLER: Motherfucker, I swear. You fucking pussy, you probably don’t even have me on hold, you little fucked-up dick. You’re gonna wait a long time, bro.
You little bitch, I’ll fucking find out who you are, you little fucking ho.

After thirty seconds, we could hear bubbling noises—a bong, Alwan thought—and then coughing. Not long afterward, the caller hung up.
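The "cold" versus "hot" distinction the article draws, soft and overarticulated versus louder and higher-pitched, is the kind of acoustic cue analytics software could measure. Below is a toy sketch, not B.B.N.'s actual Avoke method: the signals, thresholds, and two-feature classifier are all invented for illustration, using RMS energy as a loudness proxy and an autocorrelation peak as a crude pitch estimate.

```python
import numpy as np

SR = 16000  # sample rate, Hz

def tone(freq_hz, amp, dur_s=0.5):
    # Stand-in for a voice recording: a pure sine at a given pitch and level.
    t = np.arange(int(SR * dur_s)) / SR
    return amp * np.sin(2 * np.pi * freq_hz * t)

def rms(x):
    # Loudness proxy: root-mean-square energy.
    return float(np.sqrt(np.mean(x ** 2)))

def pitch(x):
    # Crude pitch estimate: autocorrelation peak between 80 and 400 Hz.
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    lo, hi = SR // 400, SR // 80
    lag = lo + int(np.argmax(ac[lo:hi]))
    return SR / lag

def label(x):
    # Invented thresholds: loud AND high-pitched reads as "hot" anger.
    return "hot" if rms(x) > 0.4 and pitch(x) > 180 else "cold"

cold_voice = tone(120, 0.2)  # soft, low-pitched
hot_voice = tone(220, 0.8)   # loud, high-pitched
print(label(cold_voice), label(hot_voice))  # cold hot
```

Real systems work on noisy speech rather than sines and use many more features, but the same two dimensions, energy and fundamental frequency, underlie the hot/cold split the article describes.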

This spring marked the fortieth anniversary of HAL, the conversational computer that was brought to life on the screen by Stanley Kubrick and Arthur C. Clarke, in “2001: A Space Odyssey.” HAL has a calm, empathic voice—a voice that is warmer than the voices of the humans in the movie, which are oddly stilted and false. HAL says that he became operational in Urbana, Illinois, in 1992, and offers to sing a song. HAL not only speaks perfectly; he seems to understand perfectly, too. I was a nine-year-old nerd in the making when the film came out, in 1968, and I’ve been waiting for a computer to talk to ever since—a fantasy shared by many computer geeks. Bill Gates has been touting speech recognition as the next big thing in computing for at least a decade. By giving computers the ability to understand speech, humankind would marry its two greatest technologies: language and toolmaking. To believers, this union can only be a matter of time.

Forty years after “2001,” how close are we to talking to computers? Today, you can use your voice to buy airplane tickets, transfer money, and get a prescription filled. If you don’t want to type, you can use one of the current crop of dictation programs to transcribe your speech; these have been improving steadily and now work reasonably well. If you are driving a car with an onboard navigator, you can get directions in one of dozens of different voices, according to your preference. In a car equipped with Sync—a collaboration of Ford, Microsoft, and Nuance, the largest speech-technology company in the world—you can use your voice to place a phone call or to control your iPod, both of which are useful when you are in what’s known in the speech-recognition industry as “hands-busy, eyes-busy” situations. State-of-the-art I.V.R.s, such as Google’s voice-based 411 service, offer natural-language understanding—you can speak almost as you would to a human operator, as opposed to having to choose from a set menu of options. I.V.R. designers create vocal personas like Julie, the perky voice that answers Amtrak’s 800 number; these voices can be “tuned” according to a company’s branding needs. Calling Virgin Mobile gets you a sassy-voiced young woman, who sounds as if she’s got her feet up on her desk.

Still, these applications of speech technology, useful though they can be, are a far cry from HAL—a conversational computer. Computers still flunk the famous Turing Test, devised by the British mathematician Alan Turing, in which a computer tries to fool a person into thinking that it’s human. And, even within limited applications, speech recognition never seems to work as well as it should. North Americans spent forty-three billion minutes on the line with an I.V.R. in 2007; according to one study, only one caller in ten was satisfied with the experience. Some companies have decided to switch back to touch-tone menus, after finding that customers prefer pushing buttons to using their voices, especially when they are inputting private information, such as account numbers. Leopard, Apple’s new operating system for the Mac, responds to voice commands, which is wonderful for people with handicaps and disabilities but extremely annoying if you have to listen to Alex, its computer-generated voice, converse with a co-worker all day.

Roger Schank was a twenty-two-year-old graduate student when “2001” was released. He came toward the end of what today appears to have been a golden era of programmer-philosophers—men like Marvin Minsky and Seymour Papert, who, in establishing the field of artificial intelligence, inspired researchers to create machines with human intelligence. Schank has spent his career trying to make computers simulate human memory and learning. When he was young, he was certain that a conversational computer would eventually be invented. Today, he’s less sure. What changed his thinking? Two things, Schank told me: “One was realizing that a lot of human speech is just chatting.” Computers proved to be very good at tasks that humans find difficult, like calculating large sums quickly and beating grand masters at chess, but they were wretched at this, one of the simplest of human activities. The other reason, as Schank explained, was that “we just didn’t know how complicated speech was until we tried to model it.” Just as sending men to the moon yielded many fundamental insights into the nature of space, so the problem of making conversational machines has taught scientists a great deal about how we hear and speak. As the Harvard cognitive scientist Steven Pinker wrote to me, “The consensus as far as I have experienced it among A.I. researchers is that natural-language processing is extraordinarily difficult, as it could involve the entirety of a person’s knowledge, which of course is extraordinarily difficult to model on a computer.” After fifty years of research, we aren’t even close.

Speech begins with a puff of breath. The diaphragm pushes air up from the lungs, and this passes between two small membranes in the upper windpipe, known as the vocal folds, which vibrate and transform the breath into sound waves. The waves strike hard surfaces inside the head—teeth, bone, the palate. By changing the shape of the mouth and the position of the tongue, the speaker makes vowels and consonants and gives timbre, tone, and color to the sound.

That process, being mechanical, is not difficult to model, and, indeed, humans had been trying to make talking machines long before A.I. existed. In the late eighteenth century, a Hungarian inventor named Wolfgang von Kempelen built a speaking machine by modelling the human vocal tract, using a bellows for lungs, a reed from a bagpipe for the vocal folds, and a keyboard to manipulate the “mouth.” By playing the keys, an operator could form complete phrases in several different languages. In the nineteenth century, Kempelen’s machine was improved on by Sir Charles Wheatstone, and that contraption, which was exhibited in London, was seen by the young Alexander Graham Bell. It inspired him to try to create his own devices, in the hope of allowing non-hearing people (Bell’s mother and his wife were deaf) to speak normally. He didn’t succeed, but his early efforts led to the invention of the telephone.

In the twentieth century, researchers created electronic talking machines. The first, called the Voder, was engineered by Bell Labs—the famed research division of A.T. & T.—and exhibited at the 1939 World’s Fair, in New York. Instead of a mechanical system made of a reed and bellows, the Voder generated sounds with electricity; as with Kempelen’s speaking machine, a human manipulated keys to produce words. The mechanical-sounding voice became a familiar attribute of movie robots in the nineteen-fifties (and, later, similar synthetic-voice effects were a staple of nineteen-seventies progressive rock). In the early sixties, Bell Labs programmed a computer to sing “Daisy, Daisy, give me your answer do.” Arthur C. Clarke, who visited the lab, heard the machine sing, and he and Kubrick subsequently used the same song in HAL’s death scene.

Hearing is more complicated to model than talking, because it involves signal processing: converting sound from waves of air into electrical impulses. The fleshy part of the ear and the ear canal capture sound waves and direct them to the eardrum, which vibrates as it is struck. These vibrations then push on the ossicles, which form a three-boned lever—that Rube Goldbergian contraption of the middle ear—that helps amplify the sound. The impulses pass into the fluid of the cochlea, which is lined with tiny hairs called cilia. They translate the impulses into electrical signals, which then travel along neural pathways to the brain. Once signals reach the brain, they are “recognized,” either by associative memories or by a rules-based system—or, as Pinker has argued, by some combination of the two.

The human ear is exquisitely sensitive; research has shown, for example, that people can distinguish between hot and cold coffee simply by hearing it poured. The ear is particularly attentive to the human voice. We can differentiate among different voices speaking together, and we can isolate voices in the midst of traffic and loud music, and we can tell the direction from which a voice is coming—all of which are difficult for computers to do. We can hear smiles at the other end of a telephone call; the ear recognizes the sound variations caused by the spreading of the lips. That’s why call-center workers are told to smile no matter what kind of abuse they’re taking.

The first attempts at speech recognition were made in the nineteen-fifties and sixties, when the A.I. pioneers tried to simulate the way the human mind apprehends language. But where do you start? Even a simple concept like “yes” might be expressed in dozens of different ways—including “yes,” “ya,” “yup,” “yeah,” “yeayuh,” “yeppers,” “yessirree,” “aye, aye,” “mmmhmm,” “uh-huh,” “sure,” “totally,” “certainly,” “indeed,” “affirmative,” “fine,” “definitely,” “you bet,” “you betcha,” “no problemo,” and “okeydoke”—and what’s the rule in that? At Nuance, whose headquarters are outside Boston, speech engineers try to anticipate all the different ways people might say yes, but they still get surprised. For example, designers found that Southerners had more trouble using the system than Northerners did, because when instructed to answer “yes” or “no” Southerners regularly added “ma’am” or “sir,” depending on the I.V.R.’s gender, and the computer wasn’t programmed to recognize that. Also, language isn’t static; the rules change. Researchers taught machines that when the pitch of a voice rises at the end of a sentence it usually means a question, only to have their work spoiled by the emergence of what linguists call “uptalk”—that Valley Girl way of making a declarative sentence sound like a question?—which is now ubiquitous across the United States.

In the seventies and eighties, many speech researchers gradually moved away from efforts to determine the rules of language and took a probabilistic approach to speech recognition. Statistical “learning algorithms”—methods of constructing models from streams of data—were the wheel on which the back of the A.I. culture was broken. As David Nahamoo, the chief technology officer for speech at I.B.M.’s Thomas J. Watson Research Center, told me, “Brute-force computing, based on probability algorithms, won out over the rule-based approach.” A speech recognizer, by learning the relative frequency with which particular words occur, both by themselves and within the context of other words, could be “trained” to make educated guesses. Such a system wouldn’t be able to understand what words mean, but, given enough data and computing power, it might work in certain, limited vocabulary situations, like medical transcription, and it might be able to perform machine translation with a high degree of accuracy.
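The context-frequency idea Nahamoo describes can be sketched in a few lines. This is a toy illustration only (a hypothetical six-word corpus and invented function names): real recognizers train on thousands of hours of transcribed speech and combine such word statistics with acoustic models. Here, a candidate transcription is scored by how often its words occur, alone and after their neighbors.

```python
from collections import Counter

# Tiny hypothetical training corpus; a real system would use a vast one.
corpus = "call my office please call my home please call the office".split()

unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
total = len(corpus)

def bigram_prob(prev, word):
    """Estimate P(word | prev) from relative frequencies in the corpus."""
    if unigrams[prev] == 0:
        return unigrams[word] / total  # back off to the word's own frequency
    return bigrams[(prev, word)] / unigrams[prev]

def score(words):
    """Score a candidate transcription; higher means more plausible."""
    p = unigrams[words[0]] / total
    for prev, word in zip(words, words[1:]):
        p *= bigram_prob(prev, word)
    return p

# An acoustically ambiguous phrase is resolved by context: the corpus has
# seen "call my office" but never "call the home".
print(score(["call", "my", "office"]) > score(["call", "the", "home"]))  # → True
```

The recognizer never understands what "office" means; it simply makes an educated guess from the frequencies it was trained on, which is exactly the trade-off Pierce objected to.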

In 1969, John Pierce, a prominent member of the staff of Bell Labs, argued in an influential letter to the Journal of the Acoustical Society of America, entitled “Whither Speech Recognition,” that there was little point in making machines that had speech recognition but no speech understanding. Regardless of the sophistication of the algorithms, the machine would still be a modern version of Kempelen’s talking head—a gimmick. But the majority of researchers felt that the narrow promise of speech recognition was better than nothing.

In 1971, the Defense Department’s Advanced Research Projects Agency made a five-year commitment to funding speech recognition. Four institutions—B.B.N., I.B.M., Stanford Research Institute, and Carnegie Mellon University—were selected as contractors, and each was given the same guidelines for developing a speech recognizer with a thousand-word vocabulary. Subsequently, additional projects were funded that might be useful to the military. One was straight out of “Star Trek”: a handheld device that could automatically translate spoken words into other languages. Another was software that could read foreign news media and render them into English.

In addition to DARPA, funding for speech recognition came from telephone companies—principally at Bell Labs—and computer companies, most notably I.B.M. The phone companies wanted voice-based automated calling, and the computer companies wanted a voice-based computer interface and automated dictation, which was a “holy grail project” (a favorite phrase of the industry). But devising a speech recognizer that worked consistently and accurately in real-world situations proved to be much harder than anyone had anticipated. It wasn’t until the early nineties that companies finally began to bring products to the consumer marketplace, but these products rarely worked as advertised. The fledgling industry went through a tumultuous period. One industry leader, Lernout & Hauspie, flamed out, in a spectacular accounting scandal.

Whether its provenance is academic or corporate, speech-recognition research is heavily dependent on the size of the data sample, or “corpus”—the sheer volume of speech you work with. The larger your corpus, the more data you can feed to the learning algorithms and the better the guesses they can make. I.B.M. collects speech not only in the lab and from broadcasts but also in the field. Andy Aaron, who works at the Watson Research Center, has spent many hours recording people driving or sitting in the front seats of cars in an effort to develop accurate speech models for automotive commands. That’s because, he told me, “when people speak in cars they don’t speak the same way they do in an office.” For example, we talk more loudly in cars, because of a phenomenon known as the Lombard effect—the speaker involuntarily raises his voice to compensate for background noise. Aaron collects speech both for recognizers and for synthesizers—computer-generated voices. “Recording for the recognizer and for the synthesizer couldn’t be more different,” he said. “In the case of the recognizer, you are teaching the system to correctly identify an unknown speech sound. So you feed it lots and lots of different samples, so that it knows all the different ways Americans might say the phoneme ‘oo.’ A synthesizer is the opposite. You audition many professional speakers and carefully choose one, because you like the sound of his voice. Then you record that speaker for dozens of hours, saying sentences that contain many diverse combinations of phonemes and common words.”

B.B.N. came to speech recognition through its origins as an acoustical-engineering firm. It worked on the design of Lincoln Center’s Philharmonic Hall in the mid-sixties, and did early research in measuring noise levels at airports, which led to quieter airplane engines. In 1997, B.B.N. was bought by G.T.E., which subsequently merged with Bell Atlantic to form Verizon. In 2004, a group of B.B.N. executives and investors put together a buyout, and the company became independent again. The speech they use to train their recognizers comes from a shared bank, the Linguistic Data Consortium.

During my visit to Cambridge, I watched as a speech engine transcribed a live Al Jazeera broadcast into more or less readable English text, with only a three-minute lag time. In another demo, software captured speech from podcasts and YouTube videos and converted it into text, with impressive accuracy—a technology that promises to make video and audio as easily searchable as text. Both technologies are now available commercially, in B.B.N.’s Broadcast Monitoring System and in EveryZing, its audio-and-video search engine. I also saw B.B.N.’s English-to-Iraqi Arabic translator; I had seen I.B.M.’s, known as the Multilingual Automatic Speech-to-Speech Translator, or MASTOR, the week before. Both worked amazingly well. At I.B.M., an English speaker made a comment (“We are here to provide humanitarian assistance for your town”) to an Iraqi. The machine repeated his sentence in English, to make sure it was understood. The MASTOR then translated the sentence into Arabic and said it out loud. The Iraqi answered in Arabic; the machine repeated the sentence in Arabic and then delivered it in English. The entire exchange took about five seconds, and combined state-of-the-art speech recognition, voice synthesis, and machine translation. Granted, the conversation was limited to what you might discuss at a checkpoint in Iraq. Still, for what they are, these translators are triumphs of the statistics-based approach.

What’s missing from all these programs, however, is emotional recognition. The current technology can capture neither the play of emphasis, rhythm, and intonation in spoken language (which linguists call prosody) nor the emotional experience of speaking and understanding language. Descartes favored a division between reason and emotion, and considered language to be a vehicle of the former. But speech without emotion, it turns out, isn’t really speech. Cognitively, the words should mean the same thing, regardless of their emotional content. But they don’t.

Speech recognition is a multidisciplinary field, involving linguists, psychologists, phoneticians, acousticians, computer scientists, and engineers. At speech conferences these days, emotional recognition is a hot topic. Julia Hirschberg, a professor of computer science at Columbia University, told me that at the last prosody conference she attended “it seemed like three-quarters of the presentations were on emotional recognition.” Research is focussed both on how to recognize a speaker’s emotional state and on how to make synthetic voices more emotionally expressive.

Elizabeth Shriberg, a senior researcher in the speech group at S.R.I. International (formerly Stanford Research Institute), said, “Especially when you talk about emotional speech, there is a big difference between acted speech and real speech.” Real anger, she went on, often builds over a number of utterances, and is much more variable than acted anger. For more accurate emotional recognition, Shriberg said, “we need the kind of data that you get from 911 and directory-assistance calls. But you can’t use those, for privacy reasons, and because they’re proprietary.”

At SAIL—the Speech Analysis and Interpretation Laboratory, on the campus of the University of Southern California, in Los Angeles—researchers work mostly with scripted speech, which students collect from actors in the U.S.C. film and drama programs. Shrikanth Narayanan, who runs the lab, is an electrical engineer, and the students in his emotion-research group are mainly engineers and computer scientists. One student was studying what happens when a speaker’s face and voice convey conflicting emotions. Another was researching how emotional states affect the way people move their heads when they talk. The research itself can be a grind. Students painstakingly listen to voices expressing many different kinds of emotion and tag each sample with information, such as how energetic the voice is and its “valence” (whether it is a negative or a positive emotion). Anger and elation are examples of emotions that have different valences but similar energy; humans use context, as well as facial and vocal cues, to distinguish them. Since the researchers have only the voice to work with, at least three of them are required to listen and decide what the emotion is. Students note voice quality, pacing, language, “disfluencies” (false starts, “um”s), and pitch. They make at least two different data sets, so that they can use separate ones for training the computer and for testing it.
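The tagging-and-splitting workflow described above can be illustrated with a minimal sketch. The feature values and the nearest-centroid rule here are invented for illustration, not SAIL's actual method: each clip has been hand-tagged with an energy score and a valence score, anger and elation share high energy so valence carries the decision, and training and test data are kept separate, as the students do.

```python
# Hypothetical hand-labelled clips: (features, label) pairs, with invented
# energy and valence scores. Anger and elation have similar energy but
# opposite valence, so valence is what distinguishes them.
train = [
    ({"energy": 0.90, "valence": -0.80}, "anger"),
    ({"energy": 0.85, "valence": -0.70}, "anger"),
    ({"energy": 0.90, "valence": 0.80}, "elation"),
    ({"energy": 0.95, "valence": 0.70}, "elation"),
]
test = [  # kept separate from training data, as in the lab
    ({"energy": 0.80, "valence": -0.90}, "anger"),
    ({"energy": 0.85, "valence": 0.90}, "elation"),
]

def centroid(label):
    """Average the training features for one emotion label."""
    pts = [f for f, l in train if l == label]
    return {k: sum(p[k] for p in pts) / len(pts) for k in ("energy", "valence")}

def classify(features):
    """Label a clip by the nearest training centroid in (energy, valence) space."""
    def dist(c):
        return sum((features[k] - c[k]) ** 2 for k in features)
    return min(("anger", "elation"), key=lambda l: dist(centroid(l)))

accuracy = sum(classify(f) == l for f, l in test) / len(test)
print(accuracy)  # → 1.0
```

Testing on a set the model never saw during training is what keeps the evaluation honest; scoring on the training clips themselves would only show that the computer memorized them.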

Facial expressions are generally thought to be universal, but so far Narayanan’s lab hasn’t found similarly universal vocal cues for emotions. “Emotions aren’t discrete,” Narayanan said. “They are a continuum, and it isn’t clear to any one perceiver where one emotion ends and another begins, so you end up studying not just the speaker but the perceiver.” The idea is that if you could train the computer to sense a speaker’s emotional state by the sound of his voice, you could also train it to respond in kind—the computer might slow down if it sensed that the speaker was confused, or assume a more soothing tone of voice if it sensed anger. One possible application of such technology would be video games, which could automatically adapt to a player’s level based on the stress in his voice. Narayanan also mentioned simulations—such as the computer-game-like training exercises that many companies now use to prepare workers for a job. “The program would sense from your voice if you are overconfident, or when you are feeling frustrated, and adjust accordingly,” he said. That reminded me of the moment in the novel “2001” when HAL, after discovering that the astronauts have doubts about him, decides to kill them. While struggling with one of the astronauts, Dave, for control of the ship, HAL says, “I can tell from your voice harmonics, Dave, that you’re badly upset. Why don’t you take a stress pill and get some rest?”

But, apart from call-center voice analytics, it’s hard to find many credible applications of emotional recognition, and it is possible that true emotional recognition is beyond the limits of the probabilistic approach. There are futuristic projects aimed at making emotionally responsive robots, and there are plans to use such robots in the care of children and the elderly. “But this is very long-range, obviously,” Narayanan said. In the meantime, we are going to be dealing with emotionless machines.

There is a small market for voice-based lie detectors, which are becoming a popular tool in police stations around the country. Many are made by Nemesysco, an Israeli company, using a technique called “layered voice analysis” to analyze some hundred and thirty parameters in the voice to establish the speaker’s psychological state. The academic world is skeptical of voice-based lie detection, because Nemesysco will not release the algorithms on which its program is based; after all, they are proprietary. Layered voice analysis has failed in two independent tests. Nemesysco’s American distributor says that’s because the tests were poorly designed. (The company played Roger Clemens’s recent congressional testimony for me through its software, so that I could see for myself the Rocket’s stress levels leaping.) Nevertheless, according to the distributor, more than a thousand copies of the software have been sold—at fourteen thousand five hundred dollars each—to law-enforcement agencies and, more recently, to insurance companies, which are using them in fraud detection.

One of the most fully realized applications of emotional recognition that I am aware of is the aggression-detection system developed by Sound Intelligence, which has been deployed in Rotterdam and Amsterdam, and other cities in the Netherlands. It has also been installed in the English city of Coventry, and is being tested in London and Manchester. One of the designers, Peter van Hengel, explained to me that the idea grew out of a project at the University of Groningen, which simulated the workings of the inner ear with computer models. “A colleague of mine applied the same inner-ear model to trying to recognize speech amid noise,” he said, “and found that it could be used to select the parts belonging to the speech and leave out the noise.” They founded Sound Intelligence in 2000, initially focussing on speech-noise separation for automatic speech recognition, with a sideline in the analysis of non-speech sounds. In 2003, the company was approached by the Dutch national railroad, which wanted to be able to detect several kinds of sound that might indicate trouble in stations and on trains (glass-breaking, graffiti-spraying, and aggressive voices). This project developed into an aggression-detection system based on the sound of people shouting: the machine detects the overstressing of the vocal cords, which occurs only in real aggression. (That’s one reason actors only approximate anger; the real thing can damage the voice.)

The city of Groningen has installed an aggression-detector at a busy intersection in an area full of pubs. Elevated microphones spaced thirty metres apart run along both sides of the street, joining an existing network of cameras. These connect to a computer at the police station in Groningen. If the system hears certain sound patterns that correspond with aggression, it sends an alert to the police station, where the police can assess the situation by examining closed-circuit monitors: if necessary, officers are dispatched to the scene. This is no HAL, either, but the system is promising, because it does not pretend to be more intelligent than it is.
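As an illustration only (Sound Intelligence's actual models simulate the inner ear and are proprietary), the alert flow just described can be sketched as a sustained-threshold detector over a hypothetical per-second "stress score" stream. Requiring the score to stay high for several seconds mirrors the distinction the designers draw between a single shout and real, sustained aggression.

```python
THRESHOLD = 0.7   # assumed cutoff for an "overstressed voice" score
MIN_SECONDS = 3   # assumed sustained duration required before alerting

def detect_alerts(scores):
    """Yield (start, end) index ranges of sustained high-stress audio."""
    start = None
    for i, s in enumerate(scores + [0.0]):  # sentinel flushes a trailing run
        if s >= THRESHOLD and start is None:
            start = i                        # a high-stress run begins
        elif s < THRESHOLD and start is not None:
            if i - start >= MIN_SECONDS:     # long enough to alert on
                yield (start, i)
            start = None

# One brief spike (seconds 2-3) and one four-second run (seconds 5-8):
stream = [0.1, 0.2, 0.9, 0.8, 0.1, 0.75, 0.8, 0.9, 0.85, 0.2]
print(list(detect_alerts(stream)))  # → [(5, 9)]
```

In the deployed system the alert would cue the closed-circuit monitors at the police station rather than dispatch officers directly, which is part of why it does not pretend to be more intelligent than it is.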

I thought the problem with the technology would be false positives—too many loud noises that the machine mistook for aggression. But in Groningen, at least, the problem has been just the opposite. “Groningen is the safest city in Holland,” van Hengel said, ruefully. “There is virtually no crime. We don’t have enough aggression to train the system properly.” ♦

The Importance of Post-Capitalist Imagination (Envolverde)

Economy
27/8/2013 – 11:47 am

by Ronan Burtenshaw and Aubrey Robinson, The Irish Left Review*


David Harvey dives into the study of the system’s contradictions and searches for alternatives: decommodification, common ownership, a permanent basic income, free public provision… Photo: Divulgação/Internet

Next month marks five years since Lehman Brothers starred in the largest bank bankruptcy in U.S. history. The collapse signaled the start of the Great Recession – the most serious crisis of world capitalism since the Second World War. How can we understand the foundations of this system now in crisis? And, with the system waging war on the working class under the guise of “austerity,” how can we imagine a world beyond it?

Few thinkers have produced more influential answers to these questions than the Marxist geographer David Harvey. Here, in a recent interview, he speaks to Ronan Burtenshaw and Aubrey Robinson about these problems.

The Irish Left Review – You are now working on a new book, The Seventeen Contradictions of Capitalism. Why focus on these contradictions?

David Harvey – The analysis of capitalism suggests that these are significant, fundamental contradictions. Periodically these contradictions spin out of control and generate a crisis. We have just been through a crisis, and I think it is important to ask: which contradictions led us into it? How can we analyze the crisis in terms of contradictions? One of Marx’s great insights was that a crisis is always the result of underlying contradictions. We therefore have to deal with the contradictions themselves, not with their results.

TILR – One of the contradictions you address is the one between the use value and the exchange value of a commodity. Why is this contradiction so fundamental to capitalism, and why do you use housing to illustrate it?

DH – We have to begin by understanding that all commodities have a use value and an exchange value. If I have a steak, its use value is that I can eat it, and its exchange value is what I have to pay in order to eat it.

Housing is very interesting in this respect, because its use value includes shelter, privacy, a world of affective relations between people – a huge list of things we use a house for. There was a time when everyone built their own house, and the house had no exchange value. Then, from the eighteenth century onward, speculative house-building appeared – Georgian terraces [from the reign of King George, in England] were built to be sold. Houses became exchange values for consumers, a form of saving. If I buy a house and pay off the mortgage, I end up owning it. So I have an asset, a form of wealth. This generates a curious politics – “not in my backyard,” “I don’t want people next door who don’t look like me.” And segregation begins in housing markets, because people want to protect the exchange value of their assets.

Then, about thirty years ago, people began to use housing as a way to make speculative gains. You could buy a house and “flip” it – buy it for £200,000 and get £250,000 for it a year later. You make £50,000, so why not? Exchange value became dominant. And that is how you get a speculative boom. In 2000, after the collapse of global stock markets, surplus capital began flowing into housing. It is an interesting kind of market: you buy a house, housing prices rise, you say “house prices are going up, I have to buy a house,” but someone else buys before you do. A housing bubble forms. People get trapped in the bubble, and the bubble bursts. Then, suddenly, many people find they can no longer enjoy the use value of housing, because the exchange-value system has destroyed the use value.

And the question arises: is it a good idea to let the use value of housing, which is crucial for people, be governed by a crazy exchange-value system? The problem arises not only in housing but in things like education and health care. In several of these fields we have unleashed exchange-value dynamics on the theory that they will deliver use value, but what we frequently see is that they blow up the use value, and people end up without good health care, good education, and good housing. That is why it seems so important to me to pay attention to the difference between use value and exchange value.

TILR – Another contradiction you discuss involves capitalism’s alternation, over time, between an emphasis on supply, on production, and an emphasis on demand, on consumption. Can you talk about how this process played out in the twentieth century and why it is so important?

DH – One big question is maintaining adequate market demand, so that whatever capital is producing can be absorbed. Another is creating the conditions under which capital can produce at a profit.

These conditions of profitable production almost always mean repressing labor. To the extent that wages are cut – paying lower and lower wages – profit rates rise. So, on the production side, the more wages are squeezed, the better: profits grow. But then a problem arises: who will buy what is produced? With labor squeezed, where is the market? If the squeeze goes too far, a crisis follows, because there is no longer enough demand to absorb the product.

At a certain point, the prevailing interpretation held that the problem in the 1930s crisis was a lack of demand. So there was a shift toward state-led investment – building new roads, the WPA [public works under the New Deal], and all that. They said “we will revitalize the economy” with debt-financed demand, and in doing so they turned to Keynesian theory. We came out of the 1930s with a strong new capacity to manage demand, with the state deeply involved in the economy. The result was high growth rates, but those high growth rates came along with greater power for workers, with rising wages and strong unions.

Strong unions and high wages mean that profit rates start to fall. Capital enters a crisis because it is not repressing workers enough. And the system’s alarm goes off. In the 1970s, people turned toward Milton Friedman and the Chicago School. It became dominant in economic theory, and people began to look at the supply side – above all at wages. Then came the wage squeeze, which began in the 1970s. Ronald Reagan attacks the air-traffic controllers; Margaret Thatcher goes after the miners; Pinochet murders left-wing militants. Labor is attacked on all sides – and the rate of profit rises. By the 1980s, the profit rate jumps, because wages are being squeezed and capital is doing very well. But then the problem arises: to whom do you sell everything that is being produced?

In the 1990s all of this was papered over by the debt economy. People were encouraged to borrow – a credit-card economy emerged, along with a housing economy heavily financed by mortgages. This masked the fact that, in reality, there was no real demand. In 2007-08, that arrangement collapsed too.

Capital faces this question: “do you work on the supply side or the demand side?” My idea, for an anti-capitalist world, is that we have to unify the two. We have to go back to use value. What use values do people need, and how do we organize production so that it satisfies the demand for those use values?

TILR – Today, everything indicates that we are in a crisis on the supply side. But austerity is an attempt to find a solution on the demand side. How do we resolve this?

DH – We have to distinguish between the interests of capitalism as a whole and what is specifically in the interest of the capitalist class, or of a part of it. During this crisis, the capitalist class has done extremely well. Some got burned, but most came out exceptionally well. According to a recent study, economic inequality in OECD countries has grown significantly since the crisis began, which means that the benefits of the crisis have been concentrated in the richest classes. In other words, the rich do not want to exit the crisis, because the crisis brings them great profits.

The population as a whole is suffering, and capitalism as a whole is not healthy, but the capitalist class – above all an oligarchy within it – is doing very well. There are various situations in which individual capitalists, acting in the interests of their class, can in fact do things that gravely damage the capitalist system as a whole. My view is that we are living through one of those situations today.

TILR – You have said repeatedly in recent times that one of the things the left should be doing is using our post-capitalist imagination and beginning to ask what a post-capitalist world would actually look like. Why does that seem so important to you? And, in your view, what would a post-capitalist world actually look like?

DH – It is important because for a long time it has been trumpeted in our ears that there is no alternative. One of the first things we have to do is think the alternative, in order to begin moving toward creating it.

The left has become so complicit with neoliberalism that you can no longer see a difference between the political parties of the left and those of the right, except on national or social questions. In political economy there is no great difference. We have to find a political economy that is an alternative to the way capitalism works. And we have some principles. That is why the contradictions are interesting. You examine each of them – for example, the contradiction between use value and exchange value – and you say: “the alternative world is one in which use values are provided.” So we can concentrate on use values and try to reduce the role of exchange values.

Or take the monetary question – of course we need money so that commodities can circulate. But the problem with money is that private individuals can appropriate it. Money becomes a form of personal power and then a fetishized desire. People organize their lives around the pursuit of money, even those who do not realize they are doing it. So we have to change the monetary system – either we tax away all the surplus gains people begin to accumulate, or we create a monetary system in which the currency dissolves and cannot be hoarded, like airline-mileage systems.

But to do that, we have to overcome the state/private-property dichotomy and propose a regime of common property. And, at a certain point, a basic income for the people has to be created, because if you have an anti-hoarding form of money you have to give people security. You have to say: “you don’t need to save for a rainy day, because you will always receive this basic income, no matter what happens.” People’s security has to be provided that way, not through private, personal savings.

By changing each of these contradictory things you arrive at a different kind of society, one far more rational than the one we have today. What happens today is that we produce and then try to persuade consumers to consume what has been produced, whether or not they want it and whether or not they need it. Instead, we have to find out what people’s basic wants and needs are and mobilize the production system to produce that. If the dynamic of exchange value is eliminated, the whole system can be reorganized in another way. One can imagine the direction a socialist alternative would take, if we move away from the dominant form of capital accumulation that now commands everything.

* This is an excerpt from the interview. The full interview will be published in the autumn issue of The Irish Left Review. / Translated by Vila Vudu.

** Originally published on the Irish Left Review website and taken from the Outras Palavras website.

(Outras Palavras)

Language can reveal the invisible, study shows (University of Wisconsin-Madison)

Public release date: 26-Aug-2013

By Gary Lupyan, University of Wisconsin-Madison

MADISON, Wis. — It is natural to imagine that the sense of sight takes in the world as it is — simply passing on what the eyes collect from light reflected by the objects around us.

But the eyes do not work alone. What we see is a function not only of incoming visual information, but also how that information is interpreted in light of other visual experiences, and may even be influenced by language.

Words can play a powerful role in what we see, according to a study published this month by University of Wisconsin–Madison cognitive scientist and psychology professor Gary Lupyan, and Emily Ward, a Yale University graduate student, in the journal Proceedings of the National Academy of Sciences.

“Perceptual systems do the best they can with inherently ambiguous inputs by putting them in context of what we know, what we expect,” Lupyan says. “Studies like this are helping us show that language is a powerful tool for shaping perceptual systems, acting as a top-down signal to perceptual processes. In the case of vision, what we consciously perceive seems to be deeply shaped by our knowledge and expectations.”

And those expectations can be altered with a single word.

To show how deeply words can influence perception, Lupyan and Ward used a technique called continuous flash suppression to render a series of objects invisible for a group of volunteers.

Each person was shown a picture of a familiar object — such as a chair, a pumpkin or a kangaroo — in one eye. At the same time, their other eye saw a series of flashing, “squiggly” lines.

“Essentially, it’s visual noise,” Lupyan says. “Because the noise patterns are high-contrast and constantly moving, they dominate, and the input from the other eye is suppressed.”

Immediately before looking at the combination of the flashing lines and suppressed object, the study participants heard one of three things: the word for the suppressed object (“pumpkin,” when the object was a pumpkin), the word for a different object (“kangaroo,” when the object was actually a pumpkin), or just static.

Then researchers asked the participants to indicate whether they saw something or not. When the word they heard matched the object that was being wiped out by the visual noise, the subjects were more likely to report that they did indeed see something than in cases where the wrong word or no word at all was paired with the image.

“Hearing the word for the object that was being suppressed boosted that object into their vision,” Lupyan says.

And hearing an unmatched word actually hurt study subjects’ chances of seeing an object.

“With the label, you’re expecting pumpkin-shaped things,” Lupyan says. “When you get a visual input consistent with that expectation, it boosts it into perception. When you get an incorrect label, it further suppresses that.”

Experiments have shown that continuous flash suppression interrupts sight so thoroughly that there are no signals in the brain to suggest the invisible objects are perceived, even implicitly.

“Unless they can tell us they saw it, there’s nothing to suggest the brain was taking it in at all,” Lupyan says. “If language affects performance on a test like this, it indicates that language is influencing vision at a pretty early stage. It’s getting really deep into the visual system.”

The study demonstrates a deeper connection between language and simple sensory perception than previously thought, and one that makes Lupyan wonder about the extent of language’s power. The influence of language may extend to other senses as well.

“A lot of previous work has focused on vision, and we have neglected to examine the role of knowledge and expectations on other modalities, especially smell and taste,” Lupyan says. “What I want to see is whether we can really alter threshold abilities,” he says. “Does expecting a particular taste, for example, allow you to detect a substance at a lower concentration?”

If you’re drinking a glass of milk, but thinking about orange juice, he says, that may change the way you experience the milk.

“There’s no point in figuring out what some objective taste is,” Lupyan says. “What’s important is whether the milk is spoiled or not. If you expect it to be orange juice, and it tastes like orange juice, it’s fine. But if you expected it to be milk, you’d think something was wrong.”

Crowdsourcing, for the Birds (NY Times)

NY Times, August 19, 2013

By JIM ROBBINS

HELENA, Mont. — On a warm morning not long ago on the shore of a small prairie lake outside this state capital, Bob Martinka trained his spotting scope on a towering cottonwood tree heavy with blue heron nests. He counted a dozen of the tall, graceful birds and got out his smartphone, not to make a call but to type the number of birds and the species into an app that sent the information to researchers in New York.

 

Mapping Bird Species: Heat maps show the northward migration of the chimney swift as modeled by the eBird network. Brighter colors indicate higher probabilities of finding the species.

Mr. Martinka, a retired state wildlife biologist and an avid bird-watcher, is part of the global ornithological network eBird. Several times a week he heads into the mountains to scan lakes, grasslands, even the local dump, and then reports his sightings to the Cornell Lab of Ornithology, a nonprofit organization based at Cornell University.

“I see rare gulls at the dump quite frequently,” Mr. Martinka said, scanning a giant mound of bird-covered trash.

Tens of thousands of birders are now what the lab calls “biological sensors,” turning their sightings into digital data by reporting where, when and how many of which species they see. Mr. Martinka’s sighting of a dozen herons is a tiny bit of information, but such bits, gathered in the millions, provide scientists with a very big picture: perhaps the first crowdsourced, real-time view of bird populations around the world.

West Kassel. A western meadowlark.

Birds are notoriously hard to count. While stationary sensors can measure things like carbon dioxide levels and highway traffic, it takes people to note the type and number of birds in an area. Until the advent of eBird, which began collecting daily global data in 2002, so-called one-day counts were the only method.

While counts like the Audubon Christmas Bird Count and the Breeding Bird Survey bring a lot of people together on one day to make bird observations across the country, and are scientifically valuable, they are different because they don’t provide year-round data.

And eBird’s daily view of bird movements has yielded a vast increase in data — and a revelation for scientists. The most informative product is what scientists call a heat map: a striking image of the bird sightings represented in various shades of orange according to their density, moving through space and time across black maps. Now, more than 300 species have a heat map of their own.

“As soon as the heat maps began to come out, everybody recognized this is a game changer in how we look at animal populations and their movement,” said John W. Fitzpatrick, director of the Cornell Lab. “Really captivating imagery teaches us more effectively.”

It was long believed, for example, that the United States had just one population of orchard orioles. Heat maps showed that the sightings were separated by a gap, meaning there are not one but two genetically distinct populations.

Moreover, the network offers a powerful way to capture data that was lost in the old days. “People for generations have been accumulating an enormous amount of information about where birds are and have been,” Dr. Fitzpatrick said. “Then it got burned when they died.”

No longer: eBird has compiled 141 million reports, or bits, and the number is increasing by 40 percent a year. In May, eBird gathered a record 5.6 million new observations from 169 countries. (Mr. Martinka’s sighting of 12 herons at once, for example, is considered one species observation, or bit.)
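The article's counting rule — one species per checklist is one "observation," and observations binned by place and time drive the heat maps — can be sketched in a few lines. The record layout, field names, and grid-cell codes below are illustrative assumptions, not the Cornell Lab's actual schema.

```python
# Toy sketch of binning checklist observations into a week-by-region grid,
# in the spirit of eBird's heat maps. Data and schema are invented.
from collections import Counter

# Each record: (species, week_of_year, grid_cell) — one "observation" per
# species per checklist, matching the article's definition of a bit.
observations = [
    ("chimney swift", 14, "TX-03"),
    ("chimney swift", 14, "TX-03"),
    ("chimney swift", 18, "MO-11"),
    ("chimney swift", 22, "WI-07"),
    ("great blue heron", 22, "MT-02"),
]

def heat_counts(records, species):
    """Count reports of one species per (week, cell); on a rendered map,
    higher counts would appear as brighter colors."""
    return Counter((week, cell) for sp, week, cell in records if sp == species)

swift_map = heat_counts(observations, "chimney swift")
print(swift_map)  # e.g. Counter({(14, 'TX-03'): 2, ...})
```

Stepping through the weeks of such a grid is what produces the animated northward-migration imagery the article describes.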

The system also offers incentives for birders to stay involved, with apps that enable them to keep their life lists (records of the species they have seen), compare their sightings with those of friends (and rivals), and know where to look for birds they haven’t seen before.

“When you get off the plane and turn your phone on,” Dr. Fitzpatrick said, “you can find out what has been seen near you over the last seven days and ask it to filter out the birds you haven’t seen yet, so with a quick look you can add to your life list.”

The system is not without problems. Citizen scientists may not be as precise in reporting data as experienced researchers are, like the ones in the Breeding Bird Survey. Cornell has tried to solve that problem by hiring top birders to travel around the world to train people like Mr. Martinka in methodology. And 500 volunteer experts read the submissions for accuracy, rejecting about 2 percent. Rare-bird sightings get special scrutiny.

The engine that makes eBird data usable is machine learning, or artificial intelligence — a combination of software and hardware that sorts through disparities, gaps and flaws in data collection, improving as it goes along.

“Machine learning says, ‘I know these data are sloppy, but fortunately there’s a lot of it,’ ” Dr. Fitzpatrick said. “It takes chunks of these data and sorts through to find patterns in the noise. These programs are learning as they go, testing and refining and getting better and better.”

Still, some experts question eBird’s validity. John Sauer, a wildlife biologist with the United States Geological Survey, says that bird-watchers’ reports lack scientific rigor. Rather than randomness, he said, “you get a lot of observations from where people like to go.” And he doubts that Cornell has proved the reliability of its machine learning efforts.

Still, the information has promise, he said, “and it’s played a powerful role in coordinating birders for recording observations, and encouraging bird-watching.”

And the data are being used by a wide array of researchers and conservationists.

Cagan H. Sekercioglu, a professor of ornithology at the University of Utah who has used similar bird-watching data in his native Turkey to study the effects of climate change on birds, called eBird “a phenomenal resource” and said that it was “getting young people involved in natural history, which might seem slow and old-fashioned in the age of instant online gratification.”

Data about bird populations can help scientists understand other changes in the natural world and be a marker for the health of overall biodiversity. “Birds are great indicators because they occur in all environments,” said Steve Kelling, the director of information science at the Cornell bird lab.

A decline in Eastern meadowlarks in part of New York State, for example, suggests that their habitat is shrinking — bad news for other species that depend on the same habitat. In California, eBird data is being used by some planners to decide where cities and towns should steer development.

The data is also being combined with radar and weather data by BirdCast, another Cornell bird lab project that forecasts migration patterns with the aim of protecting birds as they move through a gantlet of threats. “We can predict migration events that would be usable for the timing of wind generation facilities to be turned off at night,” Dr. Fitzpatrick said.

In California, biologists use the migration data to track waterfowl at critical times. When the birds are headed through the Central Valley, for example, they can ask rice farmers to flood their fields to create an improvised wetland habitat before the birds arrive. “The resolution is at such a level of detail they can make estimates of where species occur almost at a field-by-field level,” Mr. Kelling said.

EBird data has been used in Britain, too, combined with that of a similar program called BirdTrack, which uses radar images, weather models and even data from microphones on top of buildings to record the sounds of migrating birds at night.

And for bird-watchers, the eBird project has given their pastime a new sense of purpose. “It’s a really neat tool,” Mr. Martinka said. “If you see one bird or a thousand, it’s significant.”

Climate Panel Cites Near Certainty on Warming (N.Y.Times)

Tim Wimborne/Reuters. A new report from the Intergovernmental Panel on Climate Change states that the authors are now 95 percent to 100 percent confident that human activity is the primary influence on planetary warming.


Published: August 19, 2013

An international panel of scientists has found with near certainty that human activity is the cause of most of the temperature increases of recent decades, and warns that sea levels could conceivably rise by more than three feet by the end of the century if emissions continue at a runaway pace.


The scientists, whose findings are reported in a draft summary of the next big United Nations climate report, largely dismiss a recent slowdown in the pace of warming, which is often cited by climate change doubters, attributing it most likely to short-term factors.

The report emphasizes that the basic facts about future climate change are more established than ever, justifying the rise in global concern. It also reiterates that the consequences of escalating emissions are likely to be profound.

“It is extremely likely that human influence on climate caused more than half of the observed increase in global average surface temperature from 1951 to 2010,” the draft report says. “There is high confidence that this has warmed the ocean, melted snow and ice, raised global mean sea level and changed some climate extremes in the second half of the 20th century.”

The draft comes from the Intergovernmental Panel on Climate Change, a body of several hundred scientists that won the Nobel Peace Prize in 2007, along with Al Gore. Its summaries, published every five or six years, are considered the definitive assessment of the risks of climate change, and they influence the actions of governments around the world. Hundreds of billions of dollars are being spent on efforts to reduce greenhouse emissions, for instance, largely on the basis of the group’s findings.

The coming report will be the fifth major assessment from the group, created in 1988. Each report has found greater certainty that the planet is warming and greater likelihood that humans are the primary cause.

The 2007 report found “unequivocal” evidence of warming, but hedged a little on responsibility, saying the chances were at least 90 percent that human activities were the cause. The language in the new draft is stronger, saying the odds are at least 95 percent that humans are the principal cause.

On sea level, which is one of the biggest single worries about climate change, the new report goes well beyond the assessment published in 2007, which largely sidestepped the question of how much the ocean could rise this century.

The new report also reiterates a core difficulty that has plagued climate science for decades: While averages for such measures as temperature can be predicted with some confidence on a global scale, the coming changes still cannot be forecast reliably on a local scale. That leaves governments and businesses fumbling in the dark as they try to plan ahead.

On another closely watched issue, the scientists retreated slightly from their 2007 position.

Regarding the question of how much the planet could warm if carbon dioxide levels in the atmosphere doubled, the previous report largely ruled out any number below 3.6 degrees Fahrenheit. The new draft says the rise could be as low as 2.7 degrees, essentially restoring a scientific consensus that prevailed from 1979 to 2007.

But the draft says only that the low number is possible, not that it is likely. Many climate scientists see only a remote chance that the warming will be that low, with the published evidence suggesting that an increase above 5 degrees Fahrenheit is more likely if carbon dioxide doubles.

The level of carbon dioxide, the main greenhouse gas, is up 41 percent since the Industrial Revolution, and if present trends continue it could double in a matter of decades.

Warming the entire planet by 5 degrees Fahrenheit would add a stupendous amount of energy to the climate system. Scientists say the increase would be greater over land and might exceed 10 degrees at the poles.

They add that such an increase would lead to widespread melting of land ice, extreme heat waves, difficulty growing food and massive changes in plant and animal life, probably including a wave of extinctions.

The new document is not final and will not become so until an intensive, closed-door negotiating session among scientists and government leaders in Stockholm in late September. But if the past is any guide, most of the core findings of the document will survive that final review.

The document was leaked over the weekend after it was sent to a large group of people who had signed up to review it. It was first reported on in detail by the Reuters news agency, and The New York Times obtained a copy independently to verify its contents.

The Intergovernmental Panel on Climate Change does no original research, but instead periodically assesses and summarizes the published scientific literature on climate change.

The draft document “is likely to change in response to comments from governments received in recent weeks and will also be considered by governments and scientists at a four-day approval session at the end of September,” the panel’s spokesman, Jonathan Lynn, said in a statement Monday. “It is therefore premature and could be misleading to attempt to draw conclusions from it.”

After winning the Nobel Peace Prize six years ago, the group became a political target for climate doubters, who helped identify minor errors in the 2007 report. This time, the panel adopted rigorous procedures in the hope of preventing such mistakes.

Some climate doubters challenge the idea that the earth is warming at all; others concede that it is, but deny human responsibility; still others acknowledge a human role, but assert that the warming is likely to be limited and the impacts manageable. Every major scientific academy in the world has warned that global warming is a serious problem.

The panel shifted to a wider range for the potential warming, dropping the plausible low end to 2.7 degrees, after a wave of recent studies saying higher estimates were unlikely. But those studies are contested, and scientists at Stockholm are likely to debate whether to stick with that language.

Michael E. Mann, a climate scientist at Pennsylvania State University, said he feared the intergovernmental panel, in writing its draft, had been influenced by criticism from climate doubters, who advocate even lower numbers. “I think the I.P.C.C. on this point has once again erred on the side of understating the degree of the likely changes,” Dr. Mann said.

However, Christopher B. Field, a researcher at the Carnegie Institution for Science who serves on the panel but was not directly involved in the new draft, said the group had to reflect the full range of plausible scientific views.

“I think that the I.P.C.C. has a tradition of being very conservative,” Dr. Field said. “They really want the story to be right.”

Regarding the likely rise in sea level over the coming century, the new report lays out several possibilities. In the most optimistic, the world’s governments would prove far more successful at getting emissions under control than they have been in the recent past, helping to limit the total warming.

In that circumstance, sea level could be expected to rise as little as 10 inches by the end of the century, the report found. That is a bit more than the eight-inch increase in the 20th century, which proved manageable even though it caused severe erosion along the world’s shorelines.

At the other extreme, the report considers a chain of events in which emissions continue to increase at a swift pace. Under those conditions, sea level could be expected to rise at least 21 inches by 2100 and might increase a bit more than three feet, the draft report said.

Hundreds of millions of people live near sea level, and either figure would represent a challenge for humanity, scientists say. But a three-foot rise in particular would endanger many of the world’s great cities — among them New York; London; Shanghai; Venice; Sydney, Australia; Miami; and New Orleans.

A version of this article appears in print on August 20, 2013, on page A1 of the New York edition with the headline: Climate Panel Cites Near Certainty on Warming.

Global warming survey shows support for civil disobedience (Climate Connections)

20 August, 2013. Source: Climate Nexus

Photo: David Suzuki Foundation


A national survey finds that many Americans (24%) would support an organization that engaged in non-violent civil disobedience against corporate or government activities that make global warming worse.

Moreover, 13% say they would be willing to personally engage in non-violent civil disobedience for the same reason.

“Many Americans want action on climate change by government, business, and each other,” said lead researcher Anthony Leiserowitz, PhD, of Yale University. “The fact that so many Americans would support organizations engaging in civil disobedience to stop global warming – or would be willing to do so personally – is a sign that many see climate change as a clear and present danger and are frustrated with the slow pace of action.”

Another key finding of the survey is that, in the past year, Americans were more likely to discuss global warming with family and friends (33% did so often or occasionally) than to communicate about it using social media (e.g., 7% shared something about global warming on Facebook or Twitter, 6% posted a comment online in response to a news story or blog about the topic, etc.).

“Our findings are in line with other research demonstrating that person-to-person conversations – about a wide variety of topics, not just global warming – are still the most common form of communication,” said Dr. Leiserowitz. “The notion that social media have completely ‘taken over’ most of our social interactions is incorrect. For example, we find that Americans are much more likely to talk about extreme weather face-to-face or over the phone than through social media.”

Furthermore, Americans are most likely to identify their own friends and family, such as a significant other (27%), son or daughter (21%), or close friend (17%), as the people who could motivate them to take action to reduce global warming.

“Our findings show that people are most willing to listen to those personally close to them when it comes to taking action against global warming,” said researcher Ed Maibach, PhD, of George Mason University. “In fact, if someone they ‘like and respect’ asks them to take action about global warming, a third say they would attend a public meeting about global warming or sign a pledge to vote only for political candidates that share their views about global warming, among other things.”

These findings come from a nationally representative survey – Climate Change in the American Mind – conducted by the Yale Project on Climate Change Communication and the George Mason University Center for Climate Change Communication.

How many uncontacted tribes are left in the world? (New Scientist)

16:53 22 August 2013 by Bob Holmes

News emerged this week that an indigenous tribe in the Peruvian Amazon, the Mashco-Piro, has been trying to make contact with outsiders. In the past, the Mashco-Piro have always resisted interaction with strangers, avoiding – and sometimes killing – any they encounter. How should Western societies respond to these so-called uncontacted tribes? New Scientist looks at the issue.

How many uncontacted tribes are still left?
No one knows for sure. At a rough guess, there are probably more than 100 around the world, mostly in Amazonia and New Guinea, says Rebecca Spooner, of Survival International, a London-based organisation that advocates for the rights of indigenous peoples. Brazil’s count is likely to be the most accurate. The government there has identified 77 uncontacted tribes through aerial surveys, and by talking to more Westernised indigenous groups about their neighbours.

There are thought to be around 15 uncontacted tribes in Peru, a handful in other Amazonian countries, a few dozen in the Indonesian part of the island of New Guinea and two tribes in the Andaman Islands off the coast of India. There may also be some in Malaysia and central Africa.

Have they really had no contact with the outside world? 
Most have had a little, at least indirectly. “There’s always some contact with other isolated tribes, which have contact with other indigenous people, which in turn have contact with the outside world,” says Spooner.

Many of the Amazon tribes choose to avoid contact with outsiders because they have had unpleasant encounters in the past. The Mashco-Piro, for example, abandoned their settled gardens and fled into the forest. According to Glenn Shepard, an ethnologist at the Emilio Goeldi Museum in Belem, Brazil, this came after rubber companies massacred tribespeople at the turn of the 20th century. For this reason, some researchers refer to such tribes as “voluntarily isolated”, rather than uncontacted.

More recent incursions, especially by miners, oil workers and loggers, may have reinforced the tribes’ xenophobia. In 1995, oilfields were encroaching on the homeland of the uncontacted Huaorani people of eastern Ecuador. A visiting New Scientist reporter was warned that any unclothed native should be regarded as uncontacted and, thus, very dangerous.

Are there guidelines for how best to approach such tribes?
In Peru, laws prohibit outsiders from initiating contact with isolated groups in most cases. They also provide protected areas where tribes can live in peace – but there are loopholes that allow oil and mining companies into the region. Brazil has similar laws and policies that allow contact only in life-threatening situations.

Anthropologists have an ethical obligation to do no harm to their research subjects, according to the American Anthropological Association’s Statement on Ethics. However, they are rarely the first people to make contact with indigenous groups – missionaries and resource developers almost always get there first, says Kim Hill, an anthropologist at Arizona State University who has worked with several recently contacted tribes. As a result, there is no standard practice for initial contact, he says.

Why would tribes choose to end their isolation?
Often, they feel forced out by encroaching civilisation, says Spooner. Survival International has documented some cases where settlements have been bulldozed and tribespeople harassed – or even killed. This leaves the survivors feeling like they have no option but to give up.

Others see a more benign process at work, at least some of the time. Tribes may seek contact with outsiders because they begin to trust their intentions, says Hill. “As soon as the tribes believe they might have some peaceful contact, all these groups want some outside interaction,” he says. “It’s a human trait to want to expand our contacts.” Modern medicine, metal tools and education can also exert a powerful pull.

What happens then?
Often, there is a lot of disease because the tribespeople are exposed to novel pathogens. It is not uncommon for half the population to die of respiratory illness – unless outsiders bring sustained medical care, says Hill. Also, the newly integrated tribespeople frequently end up on the lowest rung of the society they join. Still, he says, when he interviews such people years later, “I don’t find anyone, pretty much, who would want to go back to the old situation.”