Monthly archive: November 2014

Latour on digital methods (Installing [social] order)


In a fascinating, apparently non-peer-reviewed article available free online here, Tommaso Venturini and Bruno Latour discuss the potential of “digital methods” for the contemporary social sciences.

The paper summarizes, quite nicely, the split of sociological methods into quantitative approaches that work on statistical aggregates (capturing supposedly macro-phenomena) and qualitative approaches that work on irreducibly basic interactions (capturing supposedly micro-phenomena). The problem is that neither helps the sociologist capture emergent phenomena, that is, controversies and events as they happen, rather than estimating them after they have emerged (quantitative macro-structures) or capturing them divorced from non-local influences (qualitative micro-phenomena).

The solution, they claim, is to adopt digital methods in the social sciences. The paper is not exactly a methodological outline of how to carry out such methods, but it does offer something of a justification for them, which sounds like this:

Thanks to digital traceability, researchers no longer need to choose between precision and scope in their observations: it is now possible to follow a multitude of interactions and, simultaneously, to distinguish the specific contribution that each one makes to the construction of social phenomena. Born in an era of scarcity, the social sciences are entering an age of abundance. In the face of the richness of these new data, nothing justifies keeping old distinctions. Endowed with a quantity of data comparable to the natural sciences, the social sciences can finally correct their lazy eyes and simultaneously maintain the focus and scope of their observations.

Colombian indigenous court convicts Farc guerrillas (BBC)

BBC

10 November 2014 Last updated at 03:22 GMT

The seven accused appeared before some 3,000 members of the Nasa community


An indigenous court in western Colombia has convicted seven left-wing Farc guerrillas over the murder of two leaders of the Nasa tribe.

Five were sentenced to between 40 and 60 years in jail and two others will receive 20 lashes.

The two victims had been removing posters praising a Farc leader when they were killed.

Indigenous authorities in Colombia have jurisdiction in their own territories unless this contravenes national law.

The verdict and sentences were decided after several hours of debate by an assembly of about 3,000 members from the indigenous reserve in the Cauca province town of Toribio.

Gabriel Pavi, leader of the Northern Cauca indigenous councils association, said the guerrillas were captured “in uniform and with rifles” and that “all are indigenous”.

The harshest sentence – 60 years in jail – was given to a man who confessed to killing the two native leaders.

Four other defendants received 40 years each for having “fired indiscriminately” on other members of the community, said Mr Pavi.

Two teenagers also arrested – reportedly aged 14 and 17 – were sentenced to 20 lashes. They are to be held at a rehabilitation centre until they are 18, at which point a new assembly will reconsider their cases.

Following the trial, the weapons used by the guerrillas were destroyed in front of the tribal court.

The sentences will be served in the state prison at Popayan, capital of Cauca.


The guerrillas’ guns were destroyed in front of the indigenous court

How to talk about the water crisis in the classroom (Porvir)

24/10/2014 – 12:57

by Marina Lopes, for Porvir

The water shortage can serve as a hook for discussing water-resource management and conscious consumption

In recent months, discussions about water and conscious consumption have gained ground because of the drought in the Southeast and Northeast regions and the supply crisis affecting the state of São Paulo, home to the country's largest metropolis. The Cantareira system, the main supplier of the region, is currently operating at just 3% of its reservoir volume. Given this scenario, how can teachers discuss the subject in the classroom? Porvir talked to several specialists and put together a list of digital resources that can help educators.

According to geographer Wagner Costa Ribeiro, of the Universidade de São Paulo, schools need to change the way they deal with the country's water resources. “Children and teenagers cannot have the myth of water abundance reinforced.” In his view, Brazil has a very high level of water availability, but that water is unevenly distributed. “It is abundant on the national scale, but very scarce in places such as the São Paulo metropolitan region,” Wagner pointed out.

The specialist believes the crisis the city is living through is a management problem, since in recent years no measures were taken to expand catchment systems, reduce losses during storage, or encourage water reuse. “Unfortunately, none of this was done. In a drier period, we have no contingency actions,” he said.

The crisis, in which part of the population goes without tap water for hours or even days, can be used to spark discussion about water use. “The idea is for conscious consumption to be a habit cultivated from childhood,” argued Denise Conselheiro, coordinator of Edukatu, a learning network on conscious consumption. According to her, this ensures that the next generations have these practices much more embedded in their daily lives.

According to the Edukatu representative, to address this subject at school, teachers should turn to playful activities and a light-hearted language. “The approach needs to be different.” It is also necessary to bring questions about water use into the student's daily life, such as the risk of waste within the school itself.

One activity, suggested by Wagner Costa Ribeiro of USP, is to ask students to bring their water bills to school. In the classroom, the teacher can compare each family's consumption with the class average and, from there, discuss ways to promote the rational use of water resources. In secondary school, the teacher can also add a debate on the water-management model adopted in the city.
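The arithmetic behind this classroom activity is simple enough to sketch in a few lines; the consumption figures below are invented for illustration, not taken from any real class:

```python
# Hypothetical monthly water consumption, in cubic metres, read off the
# water bills each student brought in (figures invented for illustration).
bills_m3 = [18.0, 25.5, 12.0, 30.0, 21.5, 16.0]

class_average = sum(bills_m3) / len(bills_m3)
print(f"Class average: {class_average:.1f} m3/month")

# Flag the households consuming above the class average — the starting
# point the activity suggests for discussing rational water use.
above_average = [b for b in bills_m3 if b > class_average]
print(f"{len(above_average)} of {len(bills_m3)} households are above average")
```

From there the discussion can move to why some households consume more, and what a reasonable per-household target would look like.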

Searching sites such as Escola Digital, the Portal do Professor (MEC), and Edukatu, Porvir gathered some digital resources that can help teachers address the subject. Here is the list:

Water in numbers

Using the language of an animated infographic, the video presents data on the distribution of water on the planet, and on consumption and waste in everyday situations. The animation shows, for example, that a three-millimetre hole in a pipe can waste 3,200 litres of water per day.

Level: primary and secondary education
Available online
Source: Escola Digital
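That 3,200-litres-per-day figure can be sanity-checked with Torricelli's law for flow through a small orifice. The pressure head and discharge coefficient below are assumptions chosen for illustration, not values given in the video:

```python
import math

# Plausibility check: how much water escapes through a 3 mm hole?
# Assumed values (NOT from the source): a modest mains pressure head
# and a typical discharge coefficient for a sharp-edged orifice.
diameter_m = 0.003   # 3 mm hole
head_m = 4.0         # assumed pressure head (~0.4 bar)
cd = 0.6             # assumed discharge coefficient
g = 9.81             # gravitational acceleration, m/s^2

area = math.pi * (diameter_m / 2) ** 2
flow_m3_s = cd * area * math.sqrt(2 * g * head_m)   # Torricelli's law
litres_per_day = flow_m3_s * 1000 * 86400
print(f"{litres_per_day:.0f} L/day")
```

Under these assumptions the result lands close to the 3,200 L/day quoted, which suggests the infographic's figure is of the right order of magnitude.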

How to prevent drought

Produced by the team at the Planeta Sustentável website, the infographic presents alternatives for the rational use of water. It also breaks consumption down by segment: agriculture, industry, or domestic use. According to the data in the infographic, the agricultural sector accounts for 70% of global consumption.

Level: primary and secondary education
Available online
Source: Escola Digital

A comic strip about water

The comic strip discusses the importance of water and how it is distributed across the planet. Through the characters' dialogue, students can see that water is abundant on the globe, but only a small part of it is fit for consumption.

Level: primary education
Available online
Source: Escola Digital

Activities on water use

Available for download, this set of activities brings together games and quizzes on the subject of water. The material seeks to raise students' awareness of the importance of promoting the rational use of water resources.

Level: primary education
Available offline
Source: Portal do Professor

Activities on the importance of water

This digital resource gathers materials on the importance of water in the environment. The activities also cover the planet's water composition and how water is made available for human consumption.

Level: primary education
Available offline
Source: Portal do Professor

How does water reach our taps?

The image illustrates the path water travels, from the moment it is drawn from nature to the moment it reaches the taps of a house. It also shows some of the water-storage processes at treatment plants.

Level: primary education
Available offline
Source: Portal do Professor

The Water Path on Edukatu

On Edukatu, teachers have access to an entire learning path dedicated to the subject of water. The material comes in two phases: the first presents digital resources that broaden knowledge of the topic in a playful way; in the second, educators are invited to develop an intervention project in the school environment, which can include awareness-raising actions on the rational use of water.

(Note: to access the material, teachers must register on the site.)

Level: primary and secondary education
Available online
Source: Edukatu

* Originally published on the Porvir website.

(Porvir)

Doctors who ‘resurrect the dead’ want to test their technique on humans (BBC)

The technique for extending lives by a few hours has never been tested on humans

“When your body is at 10 degrees, with no brain activity, no heartbeat, and no blood, there is a consensus that you are dead,” says Professor Peter Rhee of the University of Arizona. “And yet we can still bring you back.”

Rhee is not exaggerating. Together with Samuel Tisherman, of the University of Maryland in the United States, he has shown that it is possible to keep the body in a “suspended” state for hours.

The procedure has already been tested on animals and is as radical as it gets. It involves draining all the blood from the body and cooling it to 20 degrees below its normal temperature.

Once the problem in the patient's body has been fixed, the blood is pumped back in, slowly rewarming the system. When the blood temperature reaches 30 degrees, the heart starts beating again.

The animals subjected to this test had few side effects on waking. “They are a bit groggy for a while, but by the next day they are fine,” says Tisherman.

Tests on humans

Tisherman caused an international stir this year when he announced that he is ready to carry out human trials. The first subjects would be gunshot victims in Pittsburgh, Pennsylvania.

These are patients whose hearts have already stopped beating and who would have no chance of survival with conventional techniques. The American doctor fears that, because of inaccurate headlines in the press, a mistaken idea of his research has taken hold.

Peter Rhee helped create the pioneering technique, which involves draining the patient's blood

“When people think about the subject, they think of space travellers being frozen and waking up on Jupiter, or of Han Solo from Star Wars,” says Tisherman.

“That doesn't help, because it is important for people to know that this is not science fiction.”

Efforts to bring people back from what is believed to be death have existed for decades. Tisherman began his studies with Peter Safar, who in the 1960s pioneered the technique of cardiopulmonary resuscitation (CPR). With chest compressions, it is possible to keep the heart artificially active for a time.

“We were always brought up to believe that death is an absolute moment, and that when we die there is no going back,” says Sam Parnia, of the State University of New York.

“With the basic discovery of CPR we came to understand that the body's cells take hours to reach irreversible death. Even after you have become a corpse, there is still a way to rescue you.”

Recently, a 40-year-old man in Texas survived for three and a half hours on CPR.

According to the doctors on duty, “everyone with two arms was called in to take turns doing the chest compressions on the patient”.

During the compressions he remained conscious and talked to the doctors, but had the procedure been interrupted, he would have died. He eventually recovered and survived.

This case of resuscitation over such a long period only worked because there was no major injury to the patient's body. But that is rare.

‘Limbo’

The technique now developed by Tisherman is based on the idea that low temperatures keep the body alive for longer, around one or two hours.

The blood is drained and replaced with a saline solution that helps lower the body's temperature to around 10 to 15 degrees Celsius.

In experiments with pigs, about 90% of them recovered when the blood was pumped back in. Each animal spent more than an hour in “limbo”.

Chest-compression techniques already help extend the lives of people in cardiac arrest

“It is one of the most incredible things to watch: when the heart starts beating again,” says Rhee.

After the operation, various tests were carried out to assess whether there was any brain damage. Apparently none of the pigs showed problems.

The challenge of obtaining permission to test on humans has so far been enormous. Tisherman and Rhee have finally received permission to test their technique on gunshot victims in Pittsburgh.

One of the problems to be worked around is how patients adapt to another person's blood. The pigs received their own chilled blood back, but in humans it will be necessary to use stock from the blood bank.

If it works, the doctors believe the technique could be applied not only to victims of injuries such as gunshot and stab wounds, but also to people who have had a heart attack.

The research is also leading to further studies on the best chemical solution for reducing the human body's metabolism.

Read the original English version of this report on the BBC Future website.

Transitions between states of matter: It’s more complicated, scientists find (Science Daily)

Date: November 6, 2014

Source: New York University

Summary: The seemingly simple process of phase changes — those transitions between states of matter — is more complex than previously known. New work reveals the need to rethink one of science’s building blocks and, with it, how some of the basic principles underlying the behavior of matter are taught in our classrooms.

Melting ice. The seemingly simple process of phase changes — those transitions between states of matter — is more complex than previously known. Credit: © shefkate / Fotolia

The seemingly simple process of phase changes — those transitions between states of matter — is more complex than previously known, according to research based at Princeton University, Peking University and New York University.

Their study, which appears in the journal Science, reveals the need to rethink one of science’s building blocks and, with it, how some of the basic principles underlying the behavior of matter are taught in our classrooms. The researchers examined the way that a phase change, specifically the melting of a solid, occurs at a microscopic level and discovered that the transition is far more involved than earlier models had accounted for.

“This research shows that phase changes can follow multiple pathways, which is counter to what we’ve previously known,” explains Mark Tuckerman, a professor of chemistry and applied mathematics at New York University and one of the study’s co-authors. “This means the simple theories about phase transitions that we teach in classes are just not right.”

According to Tuckerman, scientists will need to change the way they think about and teach phase changes.

The work stems from a 10-year project at Princeton to develop a mathematical framework and computer algorithms to study complex behavior in systems, explained senior author Weinan E, a professor in Princeton’s Department of Mathematics and Program in Applied and Computational Mathematics. Phase changes proved to be a crucial test case for their algorithm, E said. E and Tuckerman worked with Amit Samanta, a postdoctoral researcher at Princeton now at Lawrence Livermore National Laboratory, and Tang-Qing Yu, a postdoctoral researcher at NYU’s Courant Institute of Mathematical Sciences.

“It was a test case for the rather powerful set of tools that we have developed to study hard questions about complex phenomena such as phase transitions,” E said. “The melting of a relatively simple atomic solid, such as a metal, proved to be enormously rich. With the understanding we have gained from this case, we next aim to probe more complex molecular solids such as ice.”

The findings reveal that phase transition can occur via multiple and competing pathways and that the transitions involve at least two steps. The study shows that, along one of these pathways, the first step in the transition process is the formation of point defects — local defects that occur at or around a single lattice site in a crystalline solid. These defects turn out to be highly mobile. In a second step, the point defects randomly migrate and occasionally meet to form large, disordered defect clusters.

This mechanism predicts that “the disordered cluster grows from the outside in rather than from the inside out, as current explanations suggest,” Tuckerman notes. “Over time, these clusters grow and eventually become sufficiently large to cause the transition from solid to liquid.”

Along an alternative pathway, the defects grow into thin lines of disorder (called “dislocations”) that reach across the system. Small liquid regions then pool along these dislocations; these regions expand outward from the dislocations, engulfing more and more of the solid, until the entire system becomes liquid.

The study modeled this process by tracing copper and aluminum metals from an atomic solid to an atomic liquid state. The researchers used advanced computer models and algorithms to reexamine phase changes at the microscopic level.

“Phase transitions have always been something of a mystery because they represent such a dramatic change in the state of matter,” Tuckerman observes. “When a system changes from solid to liquid, the properties change substantially.”

He adds that this research shows the surprising incompleteness of previous models of nucleation and phase changes, and helps to fill in existing gaps in basic scientific understanding.

This work is supported by the Office of Naval Research (N00014-13-1-0338), the Army Research Office (W911NF- 11-1-0101), the Department of Energy (DE-SC0009248, DE-AC52-07NA27344), and the National Science Foundation of China (CHE-1301314).


Journal Reference:

  1. A. Samanta, M. E. Tuckerman, T.-Q. Yu, W. E. Microscopic mechanisms of equilibrium melting of a solid. Science, 2014; 346 (6210): 729. DOI: 10.1126/science.1253810

Ghost illusion created in the lab (Science Daily)

Date: November 6, 2014

Source: Ecole Polytechnique Fédérale de Lausanne

Summary: Patients suffering from neurological or psychiatric conditions have often reported ‘feeling a presence’ watching over them. Now, researchers have succeeded in recreating these ghostly illusions in the lab.

This image depicts a person experiencing the ghost illusion in the lab. Credit: Alain Herzog/EPFL

Ghosts exist only in the mind, and scientists know just where to find them, an EPFL study suggests. Patients suffering from neurological or psychiatric conditions have often reported feeling a strange “presence.” Now, EPFL researchers in Switzerland have succeeded in recreating this so-called ghost illusion in the laboratory.

On June 29, 1970, mountaineer Reinhold Messner had an unusual experience. Recounting his descent down the virgin summit of Nanga Parbat with his brother, freezing, exhausted, and oxygen-starved in the vast barren landscape, he recalls, “Suddenly there was a third climber with us… a little to my right, a few steps behind me, just outside my field of vision.”

It was invisible, but there. Stories like this have been reported countless times by mountaineers, explorers, and survivors, as well as by people who have been widowed, but also by patients suffering from neurological or psychiatric disorders. They commonly describe a presence that is felt but unseen, akin to a guardian angel or a demon. Inexplicable, illusory, and persistent.

Olaf Blanke’s research team at EPFL has now unveiled this ghost. The team was able to recreate the illusion of a similar presence in the laboratory and provide a simple explanation. They showed that the “feeling of a presence” actually results from an alteration of sensorimotor brain signals, which are involved in generating self-awareness by integrating information from our movements and our body’s position in space.

In their experiment, Blanke’s team interfered with the sensorimotor input of participants in such a way that their brains no longer identified such signals as belonging to their own body, but instead interpreted them as those of someone else. The work is published in Current Biology.

Generating a “Ghost”

The researchers first analyzed the brains of 12 patients with neurological disorders — mostly epilepsy — who have experienced this kind of “apparition.” MRI analysis of the patients' brains revealed interference with three cortical regions: the insular cortex, parietal-frontal cortex, and the temporo-parietal cortex. These three areas are involved in self-awareness, movement, and the sense of position in space (proprioception). Together, they contribute to multisensory signal processing, which is important for the perception of one's own body.

The scientists then carried out a “dissonance” experiment in which blindfolded participants performed movements with their hand in front of their body. Behind them, a robotic device reproduced their movements, touching them on the back in real time. The result was a kind of spatial discrepancy, but because of the synchronized movement of the robot, the participant’s brain was able to adapt and correct for it.

Next, the neuroscientists introduced a temporal delay between the participant’s movement and the robot’s touch. Under these asynchronous conditions, distorting temporal and spatial perception, the researchers were able to recreate the ghost illusion.

An “Unbearable” Experience

The participants were unaware of the experiment’s purpose. After about three minutes of the delayed touching, the researchers asked them what they felt. Instinctively, several subjects reported a strong “feeling of a presence,” even counting up to four “ghosts” where none existed. “For some, the feeling was even so strong that they asked to stop the experiment,” said Giulio Rognini, who led the study.

“Our experiment induced the sensation of a foreign presence in the laboratory for the first time. It shows that it can arise under normal conditions, simply through conflicting sensory-motor signals,” explained Blanke. “The robotic system mimics the sensations of some patients with mental disorders or of healthy individuals under extreme circumstances. This confirms that it is caused by an altered perception of their own bodies in the brain.”

A Deeper Understanding of Schizophrenia

In addition to explaining a phenomenon that is common to many cultures, the aim of this research is to better understand some of the symptoms of patients suffering from schizophrenia. Such patients often suffer from hallucinations or delusions associated with the presence of an alien entity whose voice they may hear or whose actions they may feel. Many scientists attribute these perceptions to a malfunction of brain circuits that integrate sensory information in relation to our body’s movements.

“Our brain possesses several representations of our body in space,” added Giulio Rognini. “Under normal conditions, it is able to assemble a unified perception of the self from these representations. But when the system malfunctions because of disease — or, in this case, a robot — this can sometimes create a second representation of one's own body, which is no longer perceived as ‘me’ but as someone else, a ‘presence’.”

It is unlikely that these findings will stop anyone from believing in ghosts. However, for scientists, it’s still more evidence that they only exist in our minds.

Watch the video: http://youtu.be/GnusbO8QjbE


Journal Reference:

  1. Olaf Blanke, Polona Pozeg, Masayuki Hara, Lukas Heydrich, Andrea Serino, Akio Yamamoto, Toshiro Higuchi, Roy Salomon, Margitta Seeck, Theodor Landis, Shahar Arzy, Bruno Herbelin, Hannes Bleuler, Giulio Rognini. Neurological and Robot-Controlled Induction of an Apparition. Current Biology, 2014. DOI: 10.1016/j.cub.2014.09.049

The global carbon bubble (Eco21)

06/11/2014 – 12:25

by Ricardo Abramovay*

Fossil fuels are strong candidates to sit at the epicentre of a new global financial crisis. Journalist Ambrose Evans-Pritchard's assessment is based on a series of interviews with influential players in the energy sector and on two recent reports on the impact of climate negotiations on these markets. Given that, of the trillion reais that, according to the BNDES, are to be invested in infrastructure in Brazil by 2017, almost half will go to the oil and gas sector, the subject is of strategic interest to the country.

The first report comes from the Carbon Tracker Initiative, a working group led by the entrepreneur, researcher, and activist Jeremy Leggett, which has gained immense international prestige by demonstrating the existence of a carbon bubble in the global energy market. The expression has a double meaning, physical and financial. The physical bubble is, of course, related to climate change. To meet the goal of limiting the rise in average global temperature to at most 2°C by the end of the 21st century, the amount of fossil fuel burned by the economic system cannot exceed the equivalent of between 900 and 1,000 gigatonnes of greenhouse-gas emissions between 2010 and 2050. Yet the fossil assets in companies' hands (in oil, coal, and gas) amount to almost three times that limit.

This is the sense in which there is a carbon bubble: these assets can only be converted into wealth by destroying the climate system. In theory, the carbon released into the atmosphere could be captured and stored, but to date the costs of such operations are exorbitant, and there is no indication that they are about to become economically viable. So there is no third way: either two-thirds of the fossil reserves held by the energy giants stay in the ground, or the rise in average global temperature will reach a level at which consequences such as the current drought in California and hurricanes Katrina and Sandy are only pale foretastes.
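The two-thirds figure follows directly from the numbers given: if listed reserves are roughly three times the 2010-2050 emissions budget, the unburnable share is what the budget cannot cover. A quick back-of-the-envelope check, using the upper end of the budget range from the text:

```python
# Carbon-bubble arithmetic, using the figures quoted in the text.
budget_gt = 1000             # upper end of the 900-1,000 Gt budget, 2010-2050
reserves_gt = 3 * budget_gt  # reserves are "almost three times" the limit

# Share of reserves that cannot be burned within the budget.
unburnable_fraction = (reserves_gt - budget_gt) / reserves_gt
print(f"Unburnable share of reserves: {unburnable_fraction:.0%}")
```

The result is the two-thirds of reserves that, on this accounting, would have to stay in the ground.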

This is where the financial dimension of the carbon bubble becomes important: despite the growing evidence gathered by scientists, and despite the international agreement (approved in 2010 in Cancún, Mexico) to keep the temperature rise below 2°C, large companies and their financiers continue to see fossil fuels as an extraordinary potential source of gains.

The International Energy Agency shows that global investment in fossil fuels doubled between 2000 and 2008, when it stabilized at around US$ 950 billion a year. According to a recent report by the organization Ceres, that is 3.3 times the investment in renewables made in 2012. Over the past six years, global spending on the search for fossil fuels has amounted to US$ 5.4 trillion. Practically all of this investment goes into unconventional sources: tar sands (mainly in Canada), Arctic exploration, shale gas, and deep-water prospecting in Brazil and the Gulf of Mexico. These unconventional sources require far greater effort (and therefore cost far more) than conventional ones. They are only viable if the global price of oil stays above a threshold of around US$ 75 a barrel.

But if there is an international agreement to prevent the breakdown of the climate system, the consequence will be falling demand and, therefore, falling fossil-fuel prices. The exponential growth of renewable energy (China doubled its solar generation in the first six months of 2014 compared with the same period a year earlier) should also reduce demand for fossil fuels. The financial risk surrounding this rush to produce fossil fuels is therefore immense.

The second report cited by Ambrose Evans-Pritchard, on which the hypothesis of a global financial crisis rests, comes from the consultancy Kepler Cheuvreux. It calculates the financial losses of the energy giants should an agreement to preserve the climate system be reached: over the next 20 years, the loss would be US$ 28 trillion, of which US$ 19.3 trillion in the oil sector.

The most vulnerable segments are precisely the unconventional ones: the Arctic, tar sands, and deep water. How these numbers relate to Brazil's pre-salt reserves is a question whose discussion cannot be left to specialists alone.

* Ricardo Abramovay is Full Professor in the Department of Economics at FEA/USP.

** Originally published on the Eco21 website.

(Eco21)

Amid the water crisis, São Paulo will use treated sewage for its water supply (Agência Brasil)

According to the state government, the treated sewage will be discharged into the Guarapiranga Reservoir and the Cotia River

São Paulo will start using reuse water (treated sewage) to supply its population, Governor Geraldo Alckmin announced. The measure will also be adopted in Campinas, a city 100 kilometres from the capital. The state's municipalities are going through the worst water-supply crisis they have ever faced.

According to the state government, the treated sewage will be discharged into the Guarapiranga Reservoir and the Cotia River. Mixed into the source water, it will then be treated again and turned into drinking water. The works, which include two reuse-water production plants with the capacity to generate three thousand litres per second, are expected to be delivered in December 2015. The projects have been under way since July 2013, with investments of R$ 76.5 million.

Another measure adopted by the São Paulo government is to increase water withdrawal from Guarapiranga. Supply this month will be one thousand litres per second, enough to serve 300,000 inhabitants. Twenty-nine reservoirs will be built to expand water production by 10%, all due to be completed by the end of 2015, at an investment of R$ 169 million.
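The Guarapiranga figures imply a per-capita supply rate, which is easy to check using only the numbers in the text:

```python
# Implied per-capita supply behind the Guarapiranga figures:
# 1,000 litres per second said to serve 300,000 inhabitants.
flow_l_s = 1000
inhabitants = 300_000

# Litres per person per day: flow x seconds in a day / population.
litres_per_person_day = flow_l_s * 86_400 / inhabitants
print(f"{litres_per_person_day:.0f} L per person per day")
```

The result, 288 litres per person per day, is a gross supply figure; it includes distribution losses and non-residential use, not just household consumption.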

No município de Campinas, a Sociedade de Abastecimento de Água e Esgoto (Sanasa) anunciou, no final de outubro, que vai modificar a Estação de Tratamento de Esgoto Anhumas para produzir água de reúso. Essa água é lançada no Rio Atibaia com 99% de pureza, assegura o prefeito Jonas Donizette.

Um sistema adutor, executado em parceria com o Aeroporto Internacional de Viracopos, que já adota a tecnologia, vai também levar água da Estação Produtora de Água de Reúso (Epar) para o Rio Capivari. No total, a ampliação no volume de água, que deve estar pronta em dois anos, chega a 290 litros por segundo no Rio Capivari e 600 litros por segundo no Rio Atibaia.

O especialista em Recursos Hídricos da Universidade Estadual de Campinas, Antônio Carlos Zuffo, avalia que o abastecimento com o esgoto tratado é uma boa solução. “Vivemos numa região que não produz água suficiente para o abastecimento e vamos ter que fazer o reúso. Só que esse reúso tem que passar pelo tratamento, jogar num rio ou numa lagoa para aumentar o tratamento natural. Depois capta novamente a água e passa pela estação de tratamento. Então, passa por três tipos de tratamentos.”

Zuffo explains that treated sewage water discharged into rivers can be of higher quality than the water found in source reservoirs. According to him, many cities around the world already use this water, as in California, in the United States, where the effluent is injected into the soil so that it passes through the natural filter and returns in springs, where it is captured.

“The myth in this story is that we cannot consume sewage effluent. The sewage treatment plant does not treat it 100%, does not make it potable, but it discharges it into the watercourse with better quality than some direct discharges,” he said. According to the specialist, the water in rivers is often of lower quality because it contains diluted sewage, dumped irregularly.

(Fernanda Cruz / Agência Brasil)

http://agenciabrasil.ebc.com.br/geral/noticia/2014-11/em-meio-crise-hidrica-sao-paulo-usara-esgoto-tratado-no-abastecimento

Read more

Programme will fund five projects in the area of natural disasters (Capes)

JC 5060, November 6, 2014


The announcement was made this Wednesday (November 5)

The Coordination for the Improvement of Higher Education Personnel (Capes) announced this Wednesday, the 5th, the final results of the selection for the Programme to Support Teaching and Scientific and Technological Research on Natural Disasters (Pró-Alertas). Five projects were approved.

Pró-Alertas aims to stimulate and support joint research projects in Brazil to train human resources at the academic stricto sensu graduate level, through the development of interdisciplinary scientific and technological research in the area of natural disasters.

The initiative falls under Capes guidelines for the temporary inducement of strategic areas of Brazilian science, technology and innovation policy. It has the support of the Ministry of Science, Technology and Innovation (MCTI) and is intended to help consolidate the National Centre for Monitoring and Early Warning of Natural Disasters (Cemaden).

Benefits
The approved projects will receive funds for undergraduate research, doctoral and postdoctoral scholarships, as well as air tickets for research missions in Brazil or abroad and per diems for participation in academic events abroad on topics related to the project.

See the results.

(CCS/Capes)

http://www.capes.gov.br/sala-de-imprensa/noticias/7213-programa-ira-financiar-cinco-projetos-interdisciplinares-na-area-de-desastres-naturais

Direct brain interface between humans (Science Daily)

Date: November 5, 2014

Source: University of Washington

Summary: Researchers have successfully replicated a direct brain-to-brain connection between pairs of people as part of a scientific study following the team’s initial demonstration a year ago. In the newly published study, which involved six people, researchers were able to transmit the signals from one person’s brain over the Internet and use these signals to control the hand motions of another person within a split second of sending that signal.

In this photo, UW students Darby Losey, left, and Jose Ceballos are positioned in two different buildings on campus as they would be during a brain-to-brain interface demonstration. The sender, left, thinks about firing a cannon at various points throughout a computer game. That signal is sent over the Web directly to the brain of the receiver, right, whose hand hits a touchpad to fire the cannon. Credit: Mary Levin, University of Washington

Sometimes, words just complicate things. What if our brains could communicate directly with each other, bypassing the need for language?

University of Washington researchers have successfully replicated a direct brain-to-brain connection between pairs of people as part of a scientific study following the team’s initial demonstration a year ago. In the newly published study, which involved six people, researchers were able to transmit the signals from one person’s brain over the Internet and use these signals to control the hand motions of another person within a split second of sending that signal.

At the time of the first experiment in August 2013, the UW team was the first to demonstrate two human brains communicating in this way. The researchers then tested their brain-to-brain interface in a more comprehensive study, published Nov. 5 in the journal PLOS ONE.

“The new study brings our brain-to-brain interfacing paradigm from an initial demonstration to something that is closer to a deliverable technology,” said co-author Andrea Stocco, a research assistant professor of psychology and a researcher at UW’s Institute for Learning & Brain Sciences. “Now we have replicated our methods and know that they can work reliably with walk-in participants.”

Collaborator Rajesh Rao, a UW associate professor of computer science and engineering, is the lead author on this work.

The research team combined two kinds of noninvasive instruments and fine-tuned software to connect two human brains in real time. The process is fairly straightforward. One participant is hooked to an electroencephalography machine that reads brain activity and sends electrical pulses via the Web to the second participant, who is wearing a swim cap with a transcranial magnetic stimulation coil placed near the part of the brain that controls hand movements.

Using this setup, one person can send a command to move the hand of the other by simply thinking about that hand movement.
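In rough terms, the pipeline can be sketched as follows. This is a minimal illustrative sketch only: the function names, the power-drop detection rule, and the message format are assumptions for illustration, not the UW team's actual software.

```python
# Illustrative sketch of the sender -> network -> receiver pipeline.
# All names and thresholds below are hypothetical.

def detect_fire_intent(eeg_power, baseline, threshold=0.7):
    """Sender side: treat a sustained drop in motor-cortex EEG power
    relative to a resting baseline as imagined hand movement."""
    return eeg_power < threshold * baseline

def receiver_step(message):
    """Receiver side: a 'FIRE' message triggers the TMS pulse that
    produces the hand twitch on the touchpad."""
    return "TMS_PULSE" if message == "FIRE" else None

# One simulated round trip: the sender imagines moving the hand, the
# event travels over the network (here, just a function call), and the
# receiver's stimulation coil is triggered.
action = receiver_step("FIRE") if detect_fire_intent(eeg_power=4.2, baseline=10.0) else None
```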

The UW study involved three pairs of participants. Each pair included a sender and a receiver with different roles and constraints. They sat in separate buildings on campus about a half mile apart and were unable to interact with each other in any way — except for the link between their brains.

Each sender was in front of a computer game in which he or she had to defend a city by firing a cannon and intercepting rockets launched by a pirate ship. But because the senders could not physically interact with the game, the only way they could defend the city was by thinking about moving their hand to fire the cannon.

Across campus, each receiver sat wearing headphones in a dark room — with no ability to see the computer game — with the right hand positioned over the only touchpad that could actually fire the cannon. If the brain-to-brain interface was successful, the receiver’s hand would twitch, pressing the touchpad and firing the cannon that was displayed on the sender’s computer screen across campus.

Researchers found that accuracy varied among the pairs, ranging from 25 to 83 percent. Most misses were due to a sender failing to accurately execute the thought that sends the “fire” command. The researchers were also able to quantify the exact amount of information transferred between the two brains.

Another research team from the company Starlab in Barcelona, Spain, recently published results in the same journal showing direct communication between two human brains, but that study only tested one sender brain instead of different pairs of study participants and was conducted offline instead of in real time over the Web.

Now, with a new $1 million grant from the W.M. Keck Foundation, the UW research team is taking the work a step further in an attempt to decode and transmit more complex brain processes.

With the new funding, the research team will expand the types of information that can be transferred from brain to brain, including more complex visual and psychological phenomena such as concepts, thoughts and rules.

They’re also exploring how to influence brain waves that correspond with alertness or sleepiness. Eventually, for example, the brain of a sleepy airplane pilot dozing off at the controls could stimulate the copilot’s brain to become more alert.

The project could also eventually lead to “brain tutoring,” in which knowledge is transferred directly from the brain of a teacher to a student.

“Imagine someone who’s a brilliant scientist but not a brilliant teacher. Complex knowledge is hard to explain — we’re limited by language,” said co-author Chantel Prat, a faculty member at the Institute for Learning & Brain Sciences and a UW assistant professor of psychology.

Other UW co-authors are Joseph Wu of computer science and engineering; Devapratim Sarma and Tiffany Youngquist of bioengineering; and Matthew Bryan, formerly of the UW.

The research published in PLOS ONE was initially funded by the U.S. Army Research Office and the UW, with additional support from the Keck Foundation.


Journal Reference:

  1. Rajesh P. N. Rao, Andrea Stocco, Matthew Bryan, Devapratim Sarma, Tiffany M. Youngquist, Joseph Wu, Chantel S. Prat. A Direct Brain-to-Brain Interface in Humans. PLoS ONE, 2014; 9 (11): e111332 DOI: 10.1371/journal.pone.0111332

Humans, baboons share cumulative culture ability (Science Daily)

Date: November 5, 2014

Source: Le Centre national de la recherche scientifique (CNRS)

Summary: The ability to build up knowledge over generations, called cumulative culture, has given humankind language and technology. While it was thought to be limited to humans until now, researchers have recently found that baboons are also capable of cumulative culture.

Baboon using a touch screen. Credit: © 2014 Nicolas Claidière

The ability to build up knowledge over generations, called cumulative culture, has given mankind language and technology. While it was thought to be limited to humans until now, researchers from the Laboratoire de psychologie cognitive (CNRS/AMU), working in collaboration with colleagues at the University of Edinburgh (UK), have recently found that baboons are also capable of cumulative culture. Their findings are published in Proceedings of the Royal Society B on 5 November 2014.

Humankind is capable of great accomplishments, such as sending probes into space and eradicating diseases; these achievements have been made possible because humans learn from their elders and enrich this knowledge over generations. It was previously thought that this cumulative aspect of culture — whereby small changes build up, are transmitted, used and enriched by others — was limited to humans, but it has now been observed in another primate, the baboon.

While it is clear that nonhuman primates such as chimpanzees learn many things from their peers, each individual seems to start learning from scratch. In contrast, humans use techniques that evolve and improve from one generation to the next, and that also differ from one population to another. The origin of cumulative culture in humans has therefore remained a mystery to scientists, who are trying to identify the conditions necessary for this cultural accumulation.

Nicolas Claidière and Joël Fagot, of the Laboratoire de psychologie cognitive, conducted the present study at the CNRS Primatology Center in Rousset, southeastern France. Baboons live in groups there and have free access to an area with touch screens where they can play a “memory game” specifically designed for the study. The screen briefly displays a grid of 16 squares, four of which are red and the others white. This image is then replaced by a similar grid, but composed of only white squares, and the baboons must touch the four squares that were previously red. Phase one of the experiment started with a task-learning period in which the position of the four red squares was randomized. Phase two comprised a kind of visual form of “Chinese whispers” wherein information was transmitted from one individual to another. In this second phase, a baboon’s response (the squares touched on the screen) was used to generate the next grid pattern that the following baboon had to memorize and reproduce, and so on for 12 “generations.”

The researchers, in collaboration with Simon Kirby and Kenny Smith from the University of Edinburgh, noted that baboons performed better in the phase involving a transmission chain (compared with random testing, which continued throughout the period of the experiment): success rate (1) increased from 80% to over 95%. Due to errors by the baboons, the patterns evolved between the beginning and the end of each chain. Yet to the surprise of researchers, the random computer-generated patterns were gradually replaced by “tetrominos” (Tetris®-like shapes composed of four adjacent squares), even though these forms represent only 6.2% of possible configurations! An even more surprising result was that the baboons’ performance on these rare shapes was poor during random testing, but increased throughout the transmission chain, during which the tetrominos accumulated. Moreover, when the experiment was replicated several times, the starting patterns did not lead to the same set of tetrominos. This study shows that, like humans, baboons have the ability to transmit and accumulate changes over “cultural generations” and that these incremental changes, which may differ depending on the chain, become structured and more efficient.
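The “Chinese whispers” transmission chain described above lends itself to a small simulation. The sketch below is illustrative only: the error model and its parameters are assumptions, not the study's actual procedure.

```python
import random

GRID = 16     # 4x4 grid of squares on the touch screen
TARGETS = 4   # squares flashed red on each trial

def noisy_recall(pattern, error_rate=0.2):
    """One individual's response: each remembered square may be
    replaced by some other square with probability `error_rate`."""
    response = set()
    for square in pattern:
        if random.random() < error_rate:
            square = random.choice([s for s in range(GRID) if s not in pattern])
        response.add(square)
    # pad with random squares if replacements collided
    while len(response) < TARGETS:
        response.add(random.randrange(GRID))
    return frozenset(response)

def transmission_chain(generations=12, seed=0):
    """Iterated learning: each response becomes the next target."""
    random.seed(seed)
    pattern = frozenset(random.sample(range(GRID), TARGETS))
    chain = [pattern]
    for _ in range(generations):
        pattern = noisy_recall(pattern)
        chain.append(pattern)
    return chain

chain = transmission_chain()
```

In the real experiment, it is this feedback loop (errors feeding the next generation's target) that lets structured, easier-to-remember patterns such as tetrominos accumulate over the chain.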

Researchers have ensured that all the necessary conditions were present to observe a type of cumulative cultural evolution in non-human primates, with its three characteristic properties (progressive increase in performance, emergence of systematic structures, and lineage specificity). These results show that cumulative culture does not require specifically human capacities, such as language. So why have no examples of this type of cultural evolution been clearly identified in the wild? Perhaps because the utilitarian dimension of non-human primate culture (e.g., the development of tools) hinders such evolution.

(1) The task was considered successful if at least 3 out of 4 squares were correctly memorized.


Journal Reference:

  1. N. Claidière, K. Smith, S. Kirby, J. Fagot. Cultural evolution of systematically structured behaviour in a non-human primate. Proceedings of the Royal Society B, November 2014 DOI: 10.1098/rspb.2014.1541

Biggest Brazil Metro Area Desperate for Water (AP)

ITU, Brazil — Nov 7, 2014, 10:23 AM ET

APTOPIX Brazil Running Out of Water

It’s been nearly a month since Diomar Pereira has had running water at his home in Itu, a commuter city outside Sao Paulo that is at the epicenter of the worst drought to hit southeastern Brazil in more than eight decades.

Like others in this city whose indigenous name means “big waterfall,” Pereira must scramble to find water for drinking, bathing and cooking. On a recent day when temperatures hit 90 degrees (32 Celsius), he drove to a community kiosk where people with empty soda bottles and jugs lined up to use a water spigot. Pereira filled several 13-gallon containers, which he loaded into his Volkswagen bug.

“I have a job and five children to raise and am always in a rush to find water so we can bathe,” said Pereira, a truck driver who makes the trip to get water every couple of days. “It’s very little water for a lot of people.”

Brazil is approaching the December start of its summer rainy season with its water supply nearly depleted. More than 10 million people across Sao Paulo state, Brazil’s most populous and the nation’s economic engine, have been forced to cut water use over the past six months. A reservoir used by Itu has fallen to 2 percent of capacity and, because its system relies on rain and groundwater rather than rivers, the city is suffering more than others.

In Itu, desperation is taking hold. Police escort water trucks to keep them from being hijacked by armed men. Residents demanding restoration of tap water have staged violent protests.

Restaurants and bars are using disposable cups to avoid washing dishes, and agribusinesses are transporting soybeans and other crops by road rather than by boat in areas where rivers have dried up.

“We are entering unknown territory,” said Renato Tagnin, an expert in water resources at the environmental group Coletivo Curupira. “If this continues, we will run out of water. We have no more mechanisms and no water stored in the closet.”

The Sao Paulo metropolitan area ended its last rainy season in February with just a third of the usual rain total — only 9 inches (23 centimeters) over three months. Showers in October totaled just 1 inch (25 millimeters), one-fifth of normal.

Only consistent, steady summer rains will bring immediate relief, experts say.

But they also place blame on the government, which they say needs to upgrade a state water distribution network that loses more than 30 percent of its resources to leaks. Advocates also call for treatment plants to produce more potable water, along with better environmental protections for headwaters and rivers flowing into reservoirs.

Tagnin and others say the government ignored calls to begin rationing water months ago because it didn’t want to take such a step before the October elections and risk losing votes. The government, however, maintains there will be no need for rationing. It says its measures to conserve water are working, such as offering discounted water bills for those who limit usage and reducing water pressure during off-peak hours.

But activists and consumer groups complain the government has done too little too late and failed to keep consumers informed.

The state’s largest utility, which supplies water to more than 16 million people in Sao Paulo’s metropolitan area, for months avoided acknowledging the looming shortage. Only recently did the Sabesp utility release maps showing which neighborhoods were at risk of water cuts, and was careful to avoid using the hot-button term “rationing.”

In Itu, where the taps have been dry for weeks, residents dream of rationing; at least that would mean some water for their homes.

“I forgot what water looks like coming out of the faucet,” said Rosa Lara Leite, a woman carrying a few gallons of water in each hand at one of the city’s crowded drinking fountains.

Authorities forced the city of 160,000 to cut its daily water consumption from 16 million gallons (62 million liters) to 2 million gallons (8 million liters). Dozens of water trucks are deployed to bring in water from far off towns. Huge 5,000-gallon tanks have been set up around the city.

“We understand that people’s basic need is water. They need it,” said Marco Antonio Augusto, spokesman for a government task force created to manage Itu’s water supply. “We are bringing water from every possible place.”

Baker Franciele Bonfim is storing whatever water she can get her hands on in every possible place. She and a neighbor recently paid $200 to buy water from a private water truck, storing it in two big tanks and about 20 plastic buckets that once held margarine for her cakes.

“It’s an added expense but at least I am good for 15 days,” Bonfim said, as she used a thick hose to pour water into each bucket. “It has taken me a long time to use all this margarine. But water runs out fast.”

The IPCC is stern on climate change – but it still underestimates the situation (The Guardian)

UN body’s warning on carbon emissions is hard to ignore, but breaking the power of the fossil fuel industry won’t be easy

The Guardian, Sunday 2 November 2014 10.59 GMT

Bangkok’s skyline blanketed in a haze. The IPCC report says climate change has increased the risk of severe heatwaves and other extreme weather. Photograph: Adrees Latif/Reuters

At this point, the scientists who run the Intergovernmental Panel on Climate Change must feel like it’s time to trade their satellites, their carefully calibrated thermometers and spectrometers, their finely tuned computer models – all of them for a thesaurus. Surely, somewhere, there must be words that will prompt the world’s leaders to act.

This week, with the release of their new synthesis report, they are trying the words “severe, widespread, and irreversible” to describe the effects of climate change – which for scientists, conservative by nature, falls just short of announcing that climate change will produce a zombie apocalypse plus random beheadings plus Ebola. It’s hard to imagine how they will up the language in time for the next big global confab in Paris.

But even with all that, this new document – actually a synthesis of three big working group reports released over the last year – almost certainly underestimates the actual severity of the situation. As the Washington Post pointed out this week, past reports have always tried to err on the side of understatement; it’s a particular problem with sea level rise, since the current IPCC document does not even include the finding in May that the great Antarctic ice sheets have begun to melt. (The studies were published after the IPCC’s cutoff date.)

But when you get right down to it, who cares? The scientists have done their job; no sentient person, including Republican Senate candidates, can any longer believe in their heart of hearts that there’s not a problem here. The scientific method has triumphed: over a quarter of a century, researchers have reached astonishing consensus on a basic problem in chemistry and physics.

And the engineers have done just as well. The price of a solar panel has dropped by more than 90% over the last 25 years, and continues to plummet. In the few places they have actually been deployed at scale, the results are astonishing: there were days this summer when Germany generated 75% of its power from the wind and the sun.

That, of course, is not because Germany is so richly endowed with sunlight (it’s a rare person who books a North Sea beach holiday). It’s because the Germans have produced a remarkable quantity of political will, and put it to good use.

As opposed to the rest of the world, where the fossil fuel industry has produced an enormous amount of fear in the political class, and kept things from changing. Their vast piles of money have so far weighed more in the political balance than the vast piles of data accumulated by the scientists. In fact, the IPCC can calculate the size of the gap with great exactness. To get on the right track, they estimate, the world would have to cut fossil fuel investments annually between now and 2029, and use the money instead to push the pace of renewables.

That is a hard task, but not an impossible one. Indeed, the people’s movement symbolised by September’s mammoth climate march in New York, has begun to make an impact in dollars and cents. A new report this week shows that by delaying the Keystone pipeline in North America protesters have prevented at least $17bn (£10.6bn) in new investments in the tar sands of Canada – investments that would have produced carbon equivalent to 735 coal-fired power plants. That’s pretty good work.

Our political leaders could do much more, of course. If they put a serious price on carbon, we would move quickly out of the fossil fuel age and into the renewable future. But that won’t happen until we break the power of the fossil fuel industry. That’s why it’s very good news that divestment campaigners have been winning victories on one continent after another, as universities from Stanford to Sydney to Glasgow start selling their fossil fuel stocks in protest – hey, even the Rockefeller Brothers Fund, heir to the greatest oil fortune ever, has joined in the fight.

Breaking the power of the fossil fuel industry won’t be easy, especially since it has to happen fast. It has to happen, in fact, before the carbon we’ve unleashed into the atmosphere breaks the planet. I’m not certain we’ll win this fight – but, thanks to the IPCC, no one will ever be able to say they weren’t warned.

Climate change (Folha de S.Paulo)

November 7, 2014

Eduardo Giannetti

In “Reasons and Persons”, one of the most innovative works of analytic philosophy of the past 30 years, the philosopher Derek Parfit proposes an intriguing thought experiment. The situation he describes is hypothetical, but it helps lay bare the crux of humanity’s greatest challenge: limiting global warming to 2°C above the pre-industrial level by the end of the 21st century.

Imagine a person strapped to a bed with electrodes attached to their temples. When a dial located elsewhere is turned, the current in the electrodes increases by an infinitesimal degree, so that the patient does not even feel it. A free Big Mac is then offered to whoever turns the dial. It so happens, however, that when thousands of people do this, without each knowing about the others, the resulting discharge of energy is enough to electrocute the victim.

Who is responsible for what? Something dreadful has been perpetrated, but who is to blame? The isolated effect of each turn of the dial is by definition imperceptible: they are all “harmless torturers”. But the combined result of this myriad of actions is harmful in the extreme. To what extent does the sum of minute particles of guilt accumulate into a gigantic collective moral debt?

The climate change under way amounts to a kind of electrocution of the biosphere. Who wants it? As far as I know, no one. It is the perverse alchemy of innumerable human acts, each of them individually minute, yet it results from no human intention. And who takes, or should take, the blame for it? The majority and no one, even though some are more guilty than others.

The planet’s 7 billion inhabitants fall into three groups: about 1 billion account for 50% of total greenhouse gas emissions, while the next 3 billion account for 45%. The 3 billion at the base of the energy pyramid (half of them without access to electricity) account for only 5%. Because of their way of life and their vulnerability, this group, the only innocent one, will be the most tragically affected by the others’ “turning of the dial”.

Decarbonise we must. According to the recently published report of the UN climate panel, limiting warming to 2°C will require cutting anthropogenic emissions by 40% to 70% relative to 2010 levels by 2050, and bringing them to zero by the end of the century. How do we get there?

The complexity of the challenge is crushing. Counting on the gradual awakening of the “harmless torturers” seems unrealistic. Waiting to see, and betting on technology as a lifeline, would be reckless in the extreme. The protagonist of the action, I believe, should be the structure of incentives: price carbon and put the force of the price system to work in the service of decarbonisation.

UN chooses a Brazilian to be its water “watchdog” (O Estado de S. Paulo)

Leo Heller will replace Portugal’s Catarina de Albuquerque, criticised by Dilma and Alckmin

The new United Nations Special Rapporteur on the Right to Water and Sanitation is the Brazilian Leo Heller. Starting in 2015, he will replace Portugal’s Catarina de Albuquerque, who, after two terms, became embroiled in a series of diplomatic crises with Brazilian states and with the Dilma Rousseff administration over her criticism of water-resource management. Heller was chosen by the UN to act as the “watchdog” who will demand that countries guarantee water and sanitation for their populations.

The full article is available at: http://brasil.estadao.com.br/noticias/geral,onu-escolhe-um-brasileiro-para-ser-o-fiscal-da-agua,1588998

(Jamil Chade /O Estado de S. Paulo)

Biodiversity bill goes to general committee session with controversies unresolved (Agência Câmara)

JC 5061, November 7, 2014

Agribusiness does not accept oversight by Ibama. Family farmers want to be paid for growing heirloom seeds. Scientists criticise the royalty rules

The general committee session that will debate new rules for exploiting the genetic heritage of Brazilian biodiversity (Bill 7735/14) next Tuesday faces the challenge of resolving several impasses that still persist in the negotiation of the text. Deputies aligned with environmentalists, agribusiness and scientific research will continue rounds of negotiation until Tuesday in search of the most consensual bill.

Some of the controversies stem from demands by agribusiness-aligned deputies, who managed to include agricultural research in the substitute text. The bill sent by the government excluded agriculture, which would have continued to be regulated by Provisional Measure 2.186-16/01. Now the text under discussion includes research on seed production and breed improvement and repeals the 2001 provisional measure outright.

The government has already held several meetings between members of Congress and government technical staff. So far, three different versions of the report have been presented.

Oversight
Deputy Alceu Moreira (PMDB-RS), who is leading the negotiations, argues that the Ministry of Agriculture should be responsible for overseeing research into new seeds and new breeds. The government, however, wants to assign that duty to the Brazilian Institute of the Environment and Renewable Natural Resources (Ibama). This item is expected to be decided by vote.

“We will not allow Ibama, which is far removed from the production chain, to be responsible for overseeing research in agriculture, livestock and forestry. It will have to be the Ministry of Agriculture,” the deputy said.

Royalties
Agribusiness also managed to include in the text differentiated treatment for research on seeds and breeds. Benefit-sharing payments, a kind of royalty, will apply only to native Brazilian species. Research on species from other countries that are the focus of agribusiness, such as soybeans, sugarcane and coffee, is exempt from the charge.

And where royalties are charged, they will apply only to reproductive material (seeds, cuttings, breeding animals or semen), with no charge on the final product. “There cannot be a charge at the origin, which is the seed, and then another charge on the final product. If it applies to the final product, it cannot apply to the research,” said Alceu.

Limiting royalty payments in agriculture displeased members of the family-farming sector, who demand access to, and payment for, the cultivation of heirloom (crioula) seeds, those without genetic modification.

Council parity
Another agribusiness demand is equal representation on the Genetic Heritage Management Council (CGen) among the federal government, industry, academia and civil society. The intention is to give agribusiness a stronger voice on the council, which today includes only representatives of the Ministry of Agriculture and Embrapa.

Scientists
The scientific community, according to Deputy Luciana Santos (PCdoB-PE), who has also been leading the negotiations, criticises the low royalty percentage to be charged from the manufacturer of a final product derived from biodiversity research.

The text provides for payment of 1% of annual net revenue from the product, but this may be reduced to as little as 0.1%. It also exempts micro-enterprises, small businesses and individual micro-entrepreneurs.

Scientists also disagree with the bill’s choice of only the last stage of the chain for charging benefit-sharing. “They think it is unfair and that benefit-sharing should be considered at stages of the process because sometimes, at the end, what is sold is not a finished product but an intermediate one,” she said.

Environmentalists
Environmentalists have also not decided whether to support the text. The decision will be made next week, but the party leader, Deputy Sarney Filho (MA), left last Tuesday’s (4) meeting dissatisfied with the text presented.

The government leader, Deputy Henrique Fontana (PT-RS), said the intention is to reach a consensus text after the general committee session and put the matter to a vote on Wednesday (12). Luciana Santos admitted that, however much the deputies try to reach agreement, several provisions will only be decided by vote.

Full text of the bill:

(Agência Câmara) 

http://www2.camara.leg.br/camaranoticias/noticias/POLITICA/477144-PROJETO-DA-BIODIVERSIDADE-VAI-A-COMISSAO-GERAL-COM-VARIAS-POLEMICAS-EM-ABERTO.html

Cockroach cyborgs use microphones to detect, trace sounds (Science Daily)

Date: November 6, 2014

Source: North Carolina State University

Summary: Researchers have developed technology that allows cyborg cockroaches, or biobots, to pick up sounds with small microphones and seek out the source of the sound. The technology is designed to help emergency personnel find and rescue survivors in the aftermath of a disaster.


Image credit: Eric Whitmire.

North Carolina State University researchers have developed technology that allows cyborg cockroaches, or biobots, to pick up sounds with small microphones and seek out the source of the sound. The technology is designed to help emergency personnel find and rescue survivors in the aftermath of a disaster.

The researchers have also developed technology that can be used as an “invisible fence” to keep the biobots in the disaster area.

“In a collapsed building, sound is the best way to find survivors,” says Dr. Alper Bozkurt, an assistant professor of electrical and computer engineering at NC State and senior author of two papers on the work.

The biobots are equipped with electronic backpacks that control the cockroach’s movements. Bozkurt’s research team has created two types of customized backpacks using microphones. One type of biobot has a single microphone that can capture relatively high-resolution sound from any direction to be wirelessly transmitted to first responders.

The second type of biobot is equipped with an array of three directional microphones to detect the direction of the sound. The research team has also developed algorithms that analyze the sound from the microphone array to localize the source of the sound and steer the biobot in that direction. The system worked well during laboratory testing. Video of a laboratory test of the microphone array system is available at http://www.youtube.com/watch?v=oJXEPcv-FMw.
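The papers do not spell out the localization algorithm, but a standard approach with a small microphone array is to estimate the time difference of arrival (TDOA) between channels: the lag that best aligns two microphones' signals indicates which microphone the sound reached first, and hence its rough direction. A minimal, illustrative sketch of that idea (pure Python, hypothetical signals; not the NC State team's actual code):

```python
# Sketch: estimating which microphone a sound reached first by finding
# the lag that maximizes the cross-correlation between two channels
# (time difference of arrival, TDOA). Illustrative only; the NC State
# papers do not publish their exact algorithm.

def best_lag(sig_a, sig_b, max_lag):
    """Return the lag (in samples) of sig_b relative to sig_a that
    maximizes the cross-correlation, searched over +/- max_lag."""
    def corr(lag):
        total = 0.0
        for i, a in enumerate(sig_a):
            j = i + lag
            if 0 <= j < len(sig_b):
                total += a * sig_b[j]
        return total
    return max(range(-max_lag, max_lag + 1), key=corr)

# A short pulse arriving at mic B three samples after mic A implies
# the source is on mic A's side.
pulse = [0.0, 1.0, 0.5, 0.25, 0.0]
mic_a = pulse + [0.0] * 5
mic_b = [0.0] * 3 + pulse + [0.0] * 2
print(best_lag(mic_a, mic_b, 5))  # 3: sound reached mic A first
```

In a real system, the pairwise lags from the three directional microphones, combined with the known microphone spacing and the speed of sound, would be converted into a bearing used to steer the biobot toward the source.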

“The goal is to use the biobots with high-resolution microphones to differentiate between sounds that matter — like people calling for help — from sounds that don’t matter — like a leaking pipe,” Bozkurt says. “Once we’ve identified sounds that matter, we can use the biobots equipped with microphone arrays to zero in on where those sounds are coming from.”

A research team led by Dr. Edgar Lobaton has previously shown that biobots can be used to map a disaster area. Funded by the National Science Foundation's Cyber-Physical Systems Program, Bozkurt and Lobaton aim in the long term to merge their research efforts to both map disaster areas and pinpoint survivors. The researchers are already working with collaborator Dr. Mihail Sichitiu to develop the next generation of biobot networking and localization technology.

Bozkurt’s team also recently demonstrated technology that creates an invisible fence for keeping biobots in a defined area. This is significant because it can be used to keep biobots at a disaster site, and to keep the biobots within range of each other so that they can be used as a reliable mobile wireless network. This technology could also be used to steer biobots to light sources, so that the miniaturized solar panels on biobot backpacks can be recharged. Video of the invisible fence technology in practice can be seen at http://www.youtube.com/watch?v=mWGAKd7_fAM.

A paper on the microphone sensor research, “Acoustic Sensors for Biobotic Search and Rescue,” was presented Nov. 5 at the IEEE Sensors 2014 conference in Valencia, Spain. Lead author of the paper is Eric Whitmire, a former undergraduate at NC State. The paper was co-authored by Tahmid Latif, a Ph.D. student at NC State, and Bozkurt.

The paper on the invisible fence for biobots, “Towards Fenceless Boundaries for Solar Powered Insect Biobots,” was presented Aug. 28 at the 36th Annual International IEEE EMBS Conference in Chicago, Illinois. Latif was the lead author. Co-authors include Tristan Novak, a graduate student at NC State, Whitmire and Bozkurt.

The research was supported by the National Science Foundation under grant number 1239243.

The Creepy New Wave of the Internet (NY Review of Books)

Sue Halpern

NOVEMBER 20, 2014 ISSUE

The Zero Marginal Cost Society: The Internet of Things, the Collaborative Commons, and the Eclipse of Capitalism
by Jeremy Rifkin
Palgrave Macmillan, 356 pp., $28.00

Enchanted Objects: Design, Human Desire, and the Internet of Things
by David Rose
Scribner, 304 pp., $28.00

Age of Context: Mobile, Sensors, Data and the Future of Privacy
by Robert Scoble and Shel Israel, with a foreword by Marc Benioff
Patrick Brewster, 225 pp., $14.45 (paper)

More Awesome Than Money: Four Boys and Their Heroic Quest to Save Your Privacy from Facebook
by Jim Dwyer
Viking, 374 pp., $27.95

A detail of Penelope Umbrico’s Sunset Portraits from 11,827,282 Flickr Sunsets on 1/7/13, 2013. For the project, Umbrico searched the website Flickr for scenes of sunsets in which the sun, not the subject, predominated. The installation, consisting of two thousand 4 x 6 C-prints, explores the idea that ‘the individual assertion of “being here” is ultimately read as a lack of individuality when faced with so many assertions that are more or less all the same.’ A collection of her work, Penelope Umbrico (photographs), was published in 2011 by Aperture.

Every day a piece of computer code is sent to me by e-mail from a website to which I subscribe called IFTTT. Those letters stand for the phrase “if this then that,” and the code is in the form of a “recipe” that has the power to animate it. Recently, for instance, I chose to enable an IFTTT recipe that read, “if the temperature in my house falls below 45 degrees Fahrenheit, then send me a text message.” It’s a simple command that heralds a significant change in how we will be living our lives when much of the material world is connected—like my thermostat—to the Internet.
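The "if this then that" pattern is simply a trigger condition paired with an action. As a rough sketch (hypothetical names, not IFTTT's actual service or API), a recipe like the thermostat alert can be modeled as:

```python
# Minimal sketch of an IFTTT-style "recipe": a trigger condition paired
# with an action. All names here are illustrative, not IFTTT's API.

def make_recipe(trigger, action):
    """Return a rule that fires the action when the trigger is met."""
    def run(reading):
        if trigger(reading):
            return action(reading)
        return None
    return run

# "If the temperature in my house falls below 45 degrees Fahrenheit,
# then send me a text message."
recipe = make_recipe(
    trigger=lambda temp_f: temp_f < 45,
    action=lambda temp_f: f"ALERT: house temperature is {temp_f} F",
)

print(recipe(40))  # trigger fires: the alert message
print(recipe(68))  # trigger not met: None
```

IFTTT itself runs such rules as a hosted service wired to channels like thermostats and SMS; the point is only that each recipe reduces to a single conditional.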

It is already possible to buy Internet-enabled light bulbs that turn on when your car signals your home that you are a certain distance away and coffeemakers that sync to the alarm on your phone, as well as WiFi washer-dryers that know you are away and periodically fluff your clothes until you return, and Internet-connected slow cookers, vacuums, and refrigerators. “Check the morning weather, browse the web for recipes, explore your social networks or leave notes for your family—all from the refrigerator door,” reads the ad for one.

Welcome to the beginning of what is being touted as the Internet’s next wave by technologists, investment bankers, research organizations, and the companies that stand to rake in some of an estimated $14.4 trillion by 2022—what they call the Internet of Things (IoT). Cisco Systems, which is one of those companies, and whose CEO came up with that multitrillion-dollar figure, takes it a step further and calls this wave “the Internet of Everything,” which is both aspirational and telling. The writer and social thinker Jeremy Rifkin, whose consulting firm is working with businesses and governments to hurry this new wave along, describes it like this:

The Internet of Things will connect every thing with everyone in an integrated global network. People, machines, natural resources, production lines, logistics networks, consumption habits, recycling flows, and virtually every other aspect of economic and social life will be linked via sensors and software to the IoT platform, continually feeding Big Data to every node—businesses, homes, vehicles—moment to moment, in real time. Big Data, in turn, will be processed with advanced analytics, transformed into predictive algorithms, and programmed into automated systems to improve thermodynamic efficiencies, dramatically increase productivity, and reduce the marginal cost of producing and delivering a full range of goods and services to near zero across the entire economy.

In Rifkin’s estimation, all this connectivity will bring on the “Third Industrial Revolution,” poised as he believes it is to not merely redefine our relationship to machines and their relationship to one another, but to overtake and overthrow capitalism once the efficiencies of the Internet of Things undermine the market system, dropping the cost of producing goods to, basically, nothing. His recent book, The Zero Marginal Cost Society: The Internet of Things, the Collaborative Commons, and the Eclipse of Capitalism, is a paean to this coming epoch.

It is also deeply wishful, as many prospective arguments are, even when they start from fact. And the fact is, the Internet of Things is happening, and happening quickly. Rifkin notes that in 2007 there were ten million sensors of all kinds connected to the Internet, a number he says will increase to 100 trillion by 2030. A lot of these are small radio-frequency identification (RFID) microchips attached to goods as they crisscross the globe, but there are also sensors on vending machines, delivery trucks, cattle and other farm animals, cell phones, cars, weather-monitoring equipment, NFL football helmets, jet engines, and running shoes, among other things, generating data meant to streamline, inform, and increase productivity, often by bypassing human intervention. Additionally, the number of autonomous Internet-connected devices such as cell phones—devices that communicate directly with one another—now doubles every five years, growing from 12.5 billion in 2010 to an estimated 25 billion next year and 50 billion by 2020.

For years, a cohort of technologists, most notably Ray Kurzweil, the writer, inventor, and director of engineering at Google, have been predicting the day when computer intelligence surpasses human intelligence and merges with it in what they call the Singularity. We are not there yet, but a kind of singularity is already upon us as we swallow pills embedded with microscopic computer chips, activated by stomach acids, that will be able to report compliance with our doctor’s orders (or not) directly to our electronic medical records. Then there is the singularity that occurs when we outfit our bodies with “wearable technology” that sends data about our physical activity, heart rate, respiration, and sleep patterns to a database in the cloud as well as to our mobile phones and computers (and to Facebook and our insurance company and our employer).

Cisco Systems, for instance, which is already deep into wearable technology, is working on a platform called “the Connected Athlete” that “turns the athlete’s body into a distributed system of sensors and network intelligence…[so] the athlete becomes more than just a competitor—he or she becomes a Wireless Body Area Network, or WBAN.” Wearable technology, which generated $800 million in 2013, is expected to make nearly twice that this year. These are numbers that not only represent sales, but the public’s acceptance of, and habituation to, becoming one of the things connected to and through the Internet.

One reason that it has been easy to miss the emergence of the Internet of Things, and therefore miss its significance, is that much of what is presented to the public as its avatars seems superfluous and beside the point. An alarm clock that emits the scent of bacon, a glow ball that signals if it is too windy to go out sailing, and an “egg minder” that tells you how many eggs are in your refrigerator no matter where you are in the (Internet-connected) world, revolutionary as they may be, hardly seem the stuff of revolutions; because they are novelties, they obscure what is novel about them.

And then there is the creepiness factor. In the weeks before the general release of Google Glass, Google’s $1,500 see-through eyeglass computer that lets the wearer record what she is seeing and hearing, the press reported a number of incidents in which early adopters were physically accosted by people offended by the product’s intrusiveness. Enough is enough, the Glass opponents were saying.

Why a small cohort of people encountering Google Glass for the first time found it disturbing is the same reason that David Rose, an instructor at MIT and the founder of a company that embeds Internet connectivity into everyday devices like umbrellas and medicine vials, celebrates it and waxes nearly poetic on the potential of “heads up displays.” As he writes in Enchanted Objects: Design, Human Desire, and the Internet of Things, such devices have the potential to radically transform human encounters. Rose imagines a party where

Wearing your fashionable [heads up] display, you will instruct the device to display the people’s names and key biographical info above their heads. In the business meeting, you will call up information about previous meetings and agenda items. The HUD display will call up useful websites, tap into social networks, and dig into massive info sources…. You will fact-check your friends and colleagues…. You will also engage in real-time messaging, including videoconferencing with friends or colleagues who will participate, coach, consult, or lurk.
Whether this scenario excites or repels you, it represents the vision of more than one of the players moving us in the direction of pervasive connectivity. Rose’s company, Ambient Devices, has been at the forefront of what he calls “enchanting” objects—that is, connecting them to the Internet to make them “extraordinary.” This is a task that Glenn Lurie, the CEO of AT&T Mobility, believes is “spot on.” Among these enchanted objects are the Google Latitude Doorbell that “lets you know where your family members are and when they are approaching home,” an umbrella that turns blue when it is about to rain so you might be inspired to take it with you, and a jacket that gives you a hug every time someone likes your Facebook post.

Rose envisions “an enchanted wall in your kitchen that could display, through lines of colored light, the trends and patterns of your loved ones’ moods,” because it will offer “a better understanding of [the] hidden thoughts and emotions that are relevant to us….” If his account of a mood wall seems unduly fanciful (and nutty), it should be noted that this summer, British Airways gave passengers flying from New York to London blankets embedded with neurosensors to track how they were feeling. Apparently this was more scientific than simply asking them. According to one report:

When the fiber optics woven into the blanket turned red, flight attendants knew that the passengers were feeling stressed and anxious. Blue blankets were a sign that the passenger was feeling calm and relaxed.
Thus the airline learned that passengers were happiest when eating and drinking, and most relaxed when sleeping.

While, arguably, this “finding” is as trivial as an umbrella that turns blue when it’s going to rain, there is nothing trivial about collecting personal data, as innocuous as that data may seem. It takes very little imagination to foresee how the kitchen mood wall could lead to advertisements for antidepressants that follow you around the Web, or trigger an alert to your employer, or show up on your Facebook page because, according to Robert Scoble and Shel Israel in Age of Context: Mobile, Sensors, Data and the Future of Privacy, Facebook “wants to build a system that anticipates your needs.”

It takes even less imagination to foresee how information about your comings and goings obtained from the Google Latitude Doorbell could be used in a court of law. Cars are now outfitted with scores of sensors, including ones in the seats that determine how many passengers are in them, as well as with an “event data recorder” (EDR), which is the automobile equivalent of an airplane’s black box. As Scoble and Israel report in Age of Context, “the general legal consensus is that police will be able to subpoena car logs the same way they now subpoena phone records.”

Meanwhile, cars themselves are becoming computers on wheels, with operating system updates coming wirelessly over the air, and with increasing capacity to “understand” their owners. As Scoble and Israel tell it:

They not only adjust seat positions and mirrors automatically, but soon they’ll also know your preferences in music, service stations, dining spots and hotels…. They know when you are headed home, and soon they’ll be able to remind you to stop at the market to get a dessert for dinner.
Recent revelations from the journalist Glenn Greenwald put the number of Americans under government surveillance at a colossal 1.2 million people. Once the Internet of Things is in place, that number might easily expand to include everyone else, because a system that can remind you to stop at the market for dessert is a system that knows who you are and where you are and what you’ve been doing and with whom you’ve been doing it. And this is information we give out freely, or unwittingly, and largely without question or complaint, trading it for convenience, or what passes for convenience.

Michael Cogliantry
The journalist A.J. Jacobs wearing data-collecting sensors to keep track of his health and fitness; from Rick Smolan and Jennifer Erwitt’s The Human Face of Big Data, published in 2012 by Against All Odds
In other words, as human behavior is tracked and merchandized on a massive scale, the Internet of Things creates the perfect conditions to bolster and expand the surveillance state. In the world of the Internet of Things, your car, your heating system, your refrigerator, your fitness apps, your credit card, your television set, your window shades, your scale, your medications, your camera, your heart rate monitor, your electric toothbrush, and your washing machine—to say nothing of your phone—generate a continuous stream of data that resides largely out of reach of the individual but not of those willing to pay for it or in other ways commandeer it.

That is the point: the Internet of Things is about the “dataization” of our bodies, ourselves, and our environment. As a post on the tech website Gigaom put it, “The Internet of Things isn’t about things. It’s about cheap data.” Lots and lots of it. “The more you tell the world about yourself, the more the world can give you what you want,” says Sam Lessin, the head of Facebook’s Identity Product Group. It’s a sentiment shared by Scoble and Israel, who write:

The more the technology knows about you, the more benefits you will receive. That can leave you with the chilling sensation that big data is watching you. In the vast majority of cases, we believe the coming benefits are worth that trade-off.
So, too, does Jeremy Rifkin, who dismisses our legal, social, and cultural affinity for privacy as, essentially, a bourgeois affectation—a remnant of the enclosure laws that spawned capitalism:

Connecting everyone and everything in a neural network brings the human race out of the age of privacy, a defining characteristic of modernity, and into the era of transparency. While privacy has long been considered a fundamental right, it has never been an inherent right. Indeed, for all of human history, until the modern era, life was lived more or less publicly….
In virtually every society that we know of before the modern era, people bathed together in public, often urinated and defecated in public, ate at communal tables, frequently engaged in sexual intimacy in public, and slept huddled together en masse. It wasn’t until the early capitalist era that people began to retreat behind locked doors.
As anyone who has spent any time on Facebook knows, transparency is a fiction—literally. Social media is about presenting a curated self; it is opacity masquerading as transparency. In a sense, then, it is about preserving privacy. So when Rifkin claims that for young people, “privacy has lost much of its appeal,” he is either confusing sharing (as in sharing pictures of a vacation in Spain) with openness, or he is acknowledging that young people, especially, have become inured to the trade-offs they are making to use services like Facebook. (But they are not completely inured to it, as demonstrated both by Jim Dwyer’s painstaking book More Awesome Than Money, about the failed race to build a noncommercial social media site called Diaspora in 2010, and by the overwhelming response—as many as 31,000 requests an hour for invitations—to the recent announcement that there soon will be a Facebook alternative, Ello, that does not collect or sell users’ data.)

These trade-offs will only increase as the quotidian becomes digitized, leaving fewer and fewer opportunities to opt out. It’s one thing to edit the self that is broadcast on Facebook and Twitter, but the Internet of Things, which knows our viewing habits, grooming rituals, medical histories, and more, allows no such interventions—unless it is our behaviors and curiosities and idiosyncrasies themselves that end up on the cutting room floor.

Even so, no matter what we do, the ubiquity of the Internet of Things is putting us squarely in the path of hackers, who will have almost unlimited portals into our digital lives. When, last winter, cybercriminals broke into more than 100,000 Internet-enabled appliances including refrigerators and sent out 750,000 spam e-mails to their users, they demonstrated just how vulnerable Internet-connected machines are.

Not long after that, Forbes reported that security researchers had come up with a $20 tool that was able to remotely control a car’s steering, brakes, acceleration, locks, and lights. It was an experiment that, again, showed how simple it is to manipulate and sabotage the smartest of machines, even though—but really because—a car is now, in the words of a Ford executive, a “cognitive device.”

More recently, a study of ten popular IoT devices by the computer company Hewlett-Packard uncovered a total of 250 security flaws among them. As Jerry Michalski, a former tech industry analyst and founder of the REX think tank, observed in a recent Pew study: “Most of the devices exposed on the internet will be vulnerable. They will also be prone to unintended consequences: they will do things nobody designed for beforehand, most of which will be undesirable.”

Breaking into a home system so that the refrigerator will send out spam that will flood your e-mail and hacking a car to trigger a crash are, of course, terrible and real possibilities, yet as bad as they may be, they are limited in scope. As IoT technology is adopted in manufacturing, logistics, and energy generation and distribution, the vulnerabilities do not have to scale up for the stakes to soar. In a New York Times article last year, Matthew Wald wrote:

If an adversary lands a knockout blow [to the energy grid]…it could black out vast areas of the continent for weeks; interrupt supplies of water, gasoline, diesel fuel and fresh food; shut down communications; and create disruptions of a scale that was only hinted at by Hurricane Sandy and the attacks of Sept. 11.
In that same article, Wald noted that though government officials, law enforcement personnel, National Guard members, and utility workers had been brought together to go through a worst-case scenario practice drill, they often seemed to be speaking different languages, which did not bode well for an effective response to what is recognized as a near inevitability. (Last year the Department of Homeland Security responded to 256 cyberattacks, half of them directed at the electrical grid. This was double the number for 2012.)

This Babel problem dogs the whole Internet of Things venture. After the “things” are connected to the Internet, they need to communicate with one another: your smart TV to your smart light bulbs to your smart door locks to your smart socks (yes, they exist). And if there is no lingua franca—which there isn’t so far—then when that television breaks or becomes obsolete (because soon enough there will be an even smarter one), your choices will be limited by what language is connecting all your stuff. Though there are industry groups trying to unify the platform, in September Apple offered a glimpse of how the Internet of Things actually might play out, when it introduced the company’s new smart watch, mobile payment system, health apps, and other, seemingly random, additions to its product line. As Mat Honan virtually shouted in Wired:

Apple is building a world in which there is a computer in your every interaction, waking and sleeping. A computer in your pocket. A computer on your body. A computer paying for all your purchases. A computer opening your hotel room door. A computer monitoring your movements as you walk through the mall. A computer watching you sleep. A computer controlling the devices in your home. A computer that tells you where you parked. A computer taking your pulse, telling you how many steps you took, how high you climbed and how many calories you burned—and sharing it all with your friends…. THIS IS THE NEW APPLE ECOSYSTEM. APPLE HAS TURNED OUR WORLD INTO ONE BIG UBIQUITOUS COMPUTER.
The ecosystem may be lush, but it will be, by design, limited. Call it the Internet of Proprietary Things.

For many of us, it is difficult to imagine smart watches and WiFi-enabled light bulbs leading to a new world order, whether that new world order is a surveillance state that knows more about us than we do about ourselves or the techno-utopia envisioned by Jeremy Rifkin, where people can make much of what they need on 3-D printers powered by solar panels and unleashed human creativity. Because home automation is likely to be expensive—it will take a lot of eggs before the egg minder pays for itself—it is unlikely that those watches and light bulbs will be the primary driver of the Internet of Things, though they will be its showcase.

Rather, the Internet’s third wave will be propelled by businesses that are able to rationalize their operations by replacing people with machines, using sensors to simplify distribution patterns and reduce inventories, deploying algorithms that eliminate human error, and so on. Those business savings are crucial to Rifkin’s vision of the Third Industrial Revolution, not simply because they have the potential to bring down the price of consumer goods, but because, for the first time, a central tenet of capitalism—that increased productivity requires increased human labor—will no longer hold. And once productivity is unmoored from labor, he argues, capitalism will not be able to support itself, either ideologically or practically.

What will rise in place of capitalism is what Rifkin calls the “collaborative commons,” where goods and property are shared, and the distinction between those who own the means of production and those who are beholden to those who own the means of production disappears. “The old paradigm of owners and workers, and of sellers and consumers, is beginning to break down,” he writes.

Consumers are becoming their own producers, eliminating the distinction. Prosumers will increasingly be able to produce, consume, and share their own goods…. The automation of work is already beginning to free up human labor to migrate to the evolving social economy…. The Internet of Things frees human beings from the market economy to pursue nonmaterial shared interests on the Collaborative Commons.
Rifkin’s vision that people will occupy themselves with more fulfilling activities like making music and self-publishing novels once they are freed from work, while machines do the heavy lifting, is offered at a moment when a new kind of structural unemployment born of robotics, big data, and artificial intelligence takes hold globally, and traditional ways of making a living disappear. Rifkin’s claims may be comforting, but they are illusory and misleading. (We’ve also heard this before, in 1845, when Marx wrote in The German Ideology that under communism people would be “free to hunt in the morning, fish in the afternoon, rear cattle in the evening, [and] criticize after dinner.”)

As an example, Rifkin points to Etsy, the online marketplace where thousands of “prosumers” sell their crafts, as a model for what he dubs the new creative economy. “Currently 900,000 small producers of goods advertise at no cost on the Etsy website,” he writes.

Nearly 60 million consumers per month from around the world browse the website, often interacting personally with suppliers…. This form of laterally scaled marketing puts the small enterprise on a level playing field with the big boys, allowing them to reach a worldwide user market at a fraction of the cost.
All that may be accurate and yet largely irrelevant if the goal is for those 900,000 small producers to make an actual living. As Amanda Hess wrote last year in Slate:

Etsy says its crafters are “thinking and acting like entrepreneurs,” but they’re not thinking or acting like very effective ones. Seventy-four percent of Etsy sellers consider their shop a “business,” including 65 percent of sellers who made less than $100 last year.
While it is true that a do-it-yourself subculture is thriving, and sharing cars, tools, houses, and other property is becoming more common, it is also true that much of this activity is happening under duress as steady employment disappears. As an article in The New York Times this past summer made clear, employment in the sharing economy, also known as the gig economy, where people piece together an income by driving for Uber and delivering groceries for Instacart, leaves them little time for hunting and fishing, unless it’s hunting for work and fishing under a shared couch for loose change.

So here comes the Internet’s Third Wave. In its wake jobs will disappear, work will morph, and a lot of money will be made by the companies, consultants, and investment banks that saw it coming. Privacy will disappear, too, and our intimate spaces will become advertising platforms—last December Google sent a letter to the SEC explaining how it might run ads on home appliances—and we may be too busy trying to get our toaster to communicate with our bathroom scale to notice. Technology, which allows us to augment and extend our native capabilities, tends to evolve haphazardly, and the future that is imagined for it—good or bad—is almost always historical, which is to say, naive.

Denying problems when we don’t like the political solutions (Duke University)

6-Nov-2014

Steve Hartsoe

Duke study sheds light on why conservatives, liberals disagree so vehemently

DURHAM, N.C. — There may be a scientific answer for why conservatives and liberals disagree so vehemently over the existence of issues like climate change and specific types of crime.

A new study from Duke University finds that people will evaluate scientific evidence based on whether they view its policy implications as politically desirable. If they don’t, then they tend to deny the problem even exists.

“Logically, the proposed solution to a problem, such as an increase in government regulation or an extension of the free market, should not influence one’s belief in the problem. However, we find it does,” said co-author Troy Campbell, a Ph.D. candidate at Duke’s Fuqua School of Business. “The cure can be more immediately threatening than the problem.”

The study, “Solution Aversion: On the Relation Between Ideology and Motivated Disbelief,” appears in the November issue of the Journal of Personality and Social Psychology (viewable at http://psycnet.apa.org/journals/psp/107/5/809/).

The researchers conducted three experiments (with samples ranging from 120 to 188 participants) on three different issues — climate change, air pollution that harms lungs, and crime.

“The goal was to test, in a scientifically controlled manner, the question: Does the desirability of a solution affect beliefs in the existence of the associated problem? In other words, does what we call ‘solution aversion’ exist?” Campbell said.

“We found the answer is yes. And we found it occurs in response to some of the most common solutions for popularly discussed problems.”

For climate change, the researchers conducted an experiment to examine why more Republicans than Democrats seem to deny its existence, despite strong scientific evidence that supports it.

One explanation, they found, may have more to do with conservatives’ general opposition to the most popular solution — increasing government regulation — than with any difference in fear of the climate change problem itself, as some have proposed.

Participants in the experiment, including both self-identified Republicans and Democrats, read a statement asserting that global temperatures will rise 3.2 degrees in the 21st century. They were then asked to evaluate a proposed policy solution to address the warming.

When the policy solution emphasized a tax on carbon emissions or some other form of government regulation, which is generally opposed by Republican ideology, only 22 percent of Republicans said they believed the temperatures would rise at least as much as indicated by the scientific statement they read.

But when the proposed policy solution emphasized the free market, such as with innovative green technology, 55 percent of Republicans agreed with the scientific statement.

For Democrats, the same experiment recorded no difference in their belief, regardless of the proposed solution to climate change.

“Recognizing this effect is helpful because it allows researchers to predict not just what problems people will deny, but who will likely deny each problem,” said co-author Aaron Kay, an associate professor at Fuqua. “The more threatening a solution is to a person, the more likely that person is to deny the problem.”

The researchers found liberal-leaning individuals exhibited a similar aversion to solutions they viewed as politically undesirable in an experiment involving violent home break-ins. When the proposed solution called for looser versus tighter gun-control laws, those with more liberal gun-control ideologies were more likely to downplay the frequency of violent home break-ins.

“We should not just view some people or group as anti-science, anti-fact or hyper-scared of any problems,” Kay said. “Instead, we should understand that certain problems have particular solutions that threaten some people and groups more than others. When we realize this, we understand those who deny the problem more and we improve our ability to better communicate with them.”

Campbell added that solution aversion can help explain why political divides become so divisive and intractable.

“We argue that the political divide over many issues is just that, it’s political,” Campbell said. “These divides are not explained by just one party being more anti-science, but the fact that in general people deny facts that threaten their ideologies, left, right or center.”

The researchers noted there are additional factors that can influence how people see the policy implications of science. Additional research using larger samples and more specific methods would provide an even clearer picture, they said.

###

The study was funded by The Fuqua School of Business.

CITATION: Troy Campbell, Aaron Kay, Duke University (2014). “Solution Aversion: On the Relation Between Ideology and Motivated Disbelief.” Journal of Personality and Social Psychology, 107(5), 809-824. http://dx.doi.org/10.1037/a0037963

G20: Australia resists international call supporting climate change fund (The Guardian)

Exclusive: Europe and the US argue strongly that leaders should back the need for contributions to the Green Climate Fund, which helps poorer countries prepare for climate change

theguardian.com, Friday 7 November 2014 00.51 GMT

Australia’s original position was that the G20 meeting should focus solely on economic issues. Photograph: Lukas Coch/AAP

Australia is resisting a last-ditch push by the US, France and other European countries for G20 leaders at next week’s meeting in Brisbane to back contributions to the Green Climate Fund.

The prime minister has previously rejected the fund as a “Bob Brown bank on an international scale” – referring to the former leader of the Australian Greens.

The Green Climate Fund aims to help poorer countries cut their emissions and prepare for the impact of climate change, and is seen as critical to securing developing-nation support for a successful deal on reducing emissions at the United Nations meeting in Paris next year.

The US and European Union nations are also lobbying for G20 leaders to promise that post-2020 greenhouse emission reduction targets will be unveiled early, to improve the chances of a deal in Paris, but Australia is also understood to be resisting this.

As reported by Guardian Australia, Australia has reluctantly conceded the final G20 communique should include climate change as a single paragraph, acknowledging that it should be addressed by UN processes. Australia’s original position was that the meeting should focus solely on “economic issues”.

The text that has so far made it through the G20’s closed-door, consensus-driven process is very general, and reads as follows:

“We support strong and effective action to address climate change, consistent with sustainable economic growth and certainty for business and investment. We reaffirm our resolve to adopt a protocol, another legal instrument or an agreed outcome with legal force under the United Nations Framework Convention on Climate Change that is applicable to all parties at the 21st Conference of the Parties in Paris in 2015.”

Australia had previously insisted the G20 should discuss climate-related issues only as part of its deliberations on energy efficiency, but the energy efficiency action plan to be agreed at the meeting, revealed by Guardian Australia, does not require G20 leaders to commit to any actual action.

Instead it asks them to “consider” making promises next year to reduce the energy used by smartphones and computers and to develop tougher standards for car emissions.

But as the negotiations on the G20 communique reach their final stages, European nations and the US continue to argue strongly that leaders should back the need for contributions to the Green Climate Fund.

More than $2.8bn has been pledged to the fund so far – including $1bn by France and almost $1bn by Germany. More pledges are expected at a special conference in Berlin on 20 November. The UK has said it will make a “strong” contribution at that meeting.

It is understood the Department of Foreign Affairs and Trade, which leads Australia’s negotiating position, is considering whether Australia should make a pledge.

Asked about the fund before last year’s UN meeting, the prime minister said “we’re not going to be making any contributions to that”. It was reported that at one of its first cabinet meetings the Abbott government decided it would make no contributions to a fund that was described as “socialism masquerading as environmentalism”.

The government also pointedly dissented from support for the fund in a communique from last November’s Commonwealth Heads of Government meeting – a stance backed by Canada.

Abbott told the Australian newspaper at the time: “One thing the current government will never do is say one thing at home and a different thing abroad. We are committed to dismantling the Bob Brown bank [the Clean Energy Finance Corporation] at home so it would be impossible for us to support a Bob Brown bank on an international scale.”

*   *   *

Playing whack-a-mole with Australian adviser’s climate change myths (The Guardian)

Maurice Newman, business adviser to Australia’s prime minister, pops up with a litany of climate change myths and misrepresentations

Maurice Newman, the climate science denying business adviser to Australia’s prime minister Tony Abbott. Photograph: Daniel Munoz/Reuters

Reading opinion columns from Australian prime minister Tony Abbott’s top business adviser Maurice Newman reminds me of those fairground whack-a-mole games.

You smash those cartoonish mammals over their fibreglass heads with a big rubber hammer as they emerge from little round holes, yet these little subterranean mammals never know they’re beat and just come up with the same grin somewhere else.

In this climate denialist version of whack-a-mole, the mammal is replaced with Newman’s upper torso clutching the latest truthy climate factoid he has plucked indiscriminately from the intertubes.

Just like at the fairground, when you whack-a-Maurice, he just keeps popping up with another myth.

The latest version of whack-a-Maurice comes in his new opinion column in The Australian newspaper, headlined “Inconvenient truths ignored by the climate propaganda machine”.

In the article, Newman attacks renewable energy, the IPCC, Australian Greens leader Christine Milne and climate science in general while telling us that coal is cheap and reliable and that we should put our self-interest in selling that coal above all else.

Newman misrepresents the latest IPCC study, misquotes experts, pushes debunked studies, claims the Scottish Government commissioned a report that it likely never actually commissioned and rounds off by putting his faith in an internet poll that was gamed by climate sceptics.

So join me for a game of Whack-a-Maurice®.

Whack time

In the article, Newman starts with three statements about energy prices and how renewable energy projects apparently “destroy jobs” and have been terribly bad news for places that have embraced progressive policies to encourage renewable energy.

Newman writes:

Clearly [Greens Leader Christine] Milne is unaware of the cost to California, Europe and Britain of their ultra green embrace.

The Golden State’s energy prices are 40 per cent above the US national average, plunging its manufacturing and agricultural regions into depression, with one in five living in poverty.

OK. While it’s true that Californians do have comparatively expensive electricity costs, they actually have among the lowest average electricity bills across the whole of the United States.

This appears due to a combination of the state’s mild climate and its aggressive energy efficiency scheme.

California does have a renewable energy target – recently expanded to push the state to get 33 per cent of its power from renewables by 2020.

The Lawrence Berkeley National Laboratory has studied the impact of renewable energy target schemes (known in the US as Renewable Portfolio Standards) in place across the country.

The 2007 Berkeley study (carried out before California upped its target) found these RPS schemes added an average of about 38c per month (about a quarter of the cost of a takeaway coffee) to electricity bills. California’s scheme was among those with the lowest impact on bills.

In 2012, Californians had an average electricity bill of about $87 per month.

Apparently, in Newman’s razor-sharp climate policy mind, it is imposts like a 38c per month rise in electricity prices that are “plunging” the state’s agriculture industry into depression, rather than, say, one of California’s worst droughts in living memory.
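Putting those two figures side by side makes the point plain. This is a back-of-the-envelope sketch using only the numbers already cited above (the 2007 Berkeley surcharge estimate and the 2012 average bill), not a calculation from the Berkeley study itself:

```python
# Rough arithmetic on the figures cited above (illustrative only).
rps_surcharge = 0.38   # dollars per month: 2007 Berkeley estimate of the average RPS impost
avg_bill = 87.00       # dollars per month: approximate 2012 average Californian bill

# Surcharge as a percentage of the average monthly bill.
share = rps_surcharge / avg_bill * 100
print(f"RPS surcharge as share of average bill: {share:.2f}%")
```

On those numbers, the renewables surcharge comes to well under half of one per cent of the average monthly bill.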

El whack

So now to Spain. Newman writes:

Researchers at Spain’s King Juan Carlos University have found renewable energy programs destroyed 2.2 jobs for every green one created.

Newman is referring to a report titled: “Study of the effects on employment of public aid to renewable energy sources” that was published in 2009 and written by Gabriel Calzada Alvarez.

Alvarez is an associate professor at King Juan Carlos University, but Newman doesn’t mention that the study was actually co-commissioned by the “libertarian” think tank Instituto Juan de Mariana, which Alvarez founded.

Alvarez has also presented at a Heartland Institute climate conference for “sceptics” and his institute has been a sponsor of one of those conferences.

Who were the other commissioning group?

This was the Institute for Energy Research, a US-based thinktank with strong links to the US Koch brothers, whose foundations have given about $175,000 to the think tank and funnelled millions into anti-climate action projects at similar think tanks. The IER recently claimed Alvarez’s study as its own.

But was the study any good?

The US Department of Energy’s National Renewable Energy Laboratory took a studied look at it and, to put it mildly, tore the thing to bits. Here are some of the choice parts of their critique:

The analysis by the authors from King Juan Carlos University represents a significant divergence from traditional methodologies used to estimate employment impacts from renewable energy. In fact, the methodology does not reflect an employment impact analysis. Accordingly, the primary conclusion made by the authors – policy support of renewable energy results in net jobs losses – is not supported by their work …

Additionally, this analysis has oversimplifications and assumptions that lead to questions regarding its quantitative results. Finally, the authors fail to justify their implication that because of the jobs comparison, subsidies for renewables are not worthwhile. This ignores an array of benefits besides employment creation that flow from government investment in renewable energy technologies.

The Alvarez study came in for similar harsh criticism in Spain, as noted here on a blog from the US Natural Resources Defense Council.

Scots whack

And so now to whack-a-Maurice in Scotland. Newman writes:

A study by Verso Economics commissioned by the Scottish government concluded that for every job in the wind industry, 3.7 jobs were lost elsewhere.

Verso Economics? Commissioned by the Scottish government? Sounds impressive.

Indeed, when the report was published in March 2011, it was given extensive coverage in Scotland. And what did the Scottish government make of the study? A BBC report tells us the government’s view.

This report is misleading.

Does it seem odd that the Scottish government should condemn its own report?

Perhaps one reason is that there appears to be no evidence that the Scottish government actually commissioned the report Maurice Newman says it commissioned. (I have asked the author, an economist called Richard Marsh, for clarification, but the report itself makes no mention of a government commission and, when Marsh gave evidence to a Scottish parliamentary committee the following year in relation to the report, he didn’t mention one either.)

Verso Economics appears to have been a very small firm with only two employees, Marsh being one of them. This doesn’t necessarily make the arguments wrong, but it is curious that Newman would choose to cite a tiny consultancy once based in Kirkcaldy that no longer exists (Marsh is now at another firm).

Where’s the next mole?

Whacking scientists

Newman has a crack at former Australian chief scientist Penny Sackett who, according to Newman, had said in 2009 we had only five years to avoid “dangerous global warming”.

Plainly if you read Sackett’s words from 2009, she was talking about a time frame to start radically reducing emissions to prevent “dangerous global warming” down the track and not, as Newman implies, a time by which we should all be frying in our own juices.

Equally, Newman brings up the bête noire of all Australian climate science denialists, Tim Flannery. Newman writes:

When climate commissioner Tim Flannery said that “even the rain that falls isn’t actually going to fill our dams and river systems”, it was sobering, but soon we were donating to flood victims and suspected he’d dreamt it up to scare us.

Again, Newman ignores the fact that Flannery was not talking about the present, but referring to a time decades into the future if emissions remained on their current path.

Whack. Next mole?

Whacking the IPCC

Newman claims that “temperatures have gone nowhere for 18 years” while ignoring that in those 18 years the world has experienced the hottest decade on record. According to the World Meteorological Organisation, 13 of the 14 warmest years on record have all occurred since 2000.

If Newman thinks warming has stopped, why is it that between 2002 and 2011, the two main ice sheets of Antarctica and Greenland were melting at a rate of about 362 billion tonnes of ice a year – an almost six-fold increase in the rate for the previous decade?

Newman keeps popping up like our proverbial mole with a litany of myths.

He says the recent IPCC Synthesis Report “fails to mention” that the extent of Antarctic sea ice is the highest since records began.

The reason it fails to mention this is that the record was broken well after the underlying reports were finished.

And yet the Synthesis Report does mention the increase in Antarctic sea ice extent (read my post What’s going on with global warming and Antarctica’s growing sea ice? for more on this).

Next.

Bleak whacking

Newman writes:

In painting the bleakest picture they can, IPCC authors have projected CO2 levels reaching 1000 parts per million in 2100, largely through coal combustion…

Newman wants us to think that the IPCC authors are a bunch of doom merchants, and so ignores the fact that the IPCC report makes a range of projections for the future concentration of carbon dioxide in the atmosphere.

As well as the admittedly “bleak” scenario of CO2 levels reaching 1000 parts per million in the atmosphere by the end of this century (delivering something like 4C of global warming), the report also projects CO2 levels at 720ppm, 580ppm, 530ppm and 480ppm.

Newman would have struggled to miss this, given they all appear on the same chart.

We could go on and on whacking Maurice Newman’s climate denialist moles, and his column has several others, but at some point we have to stop.

Sciencey internet polls

But not before we dwell on Newman’s closing argument that 91 per cent of people think the IPCC is wrong that we’re heading for 4C of global warming.

I think you’ll agree that Newman’s source for this is beyond reproach. It’s one of those really sciencey internet polls carried out by the ABC.

So sciencey was the survey that climate science denialist groups from Australia to the US were telling supporters to visit the poll.

This is when we have to remind ourselves that Maurice Newman is the chairman of prime minister Tony Abbott’s Business Advisory Council, handpicked by Abbott himself.

As Maurice Newman himself concluded, “Enough said”.

The new GOP Senate is already gearing up to cause climate mayhem (Grist)


On Tuesday night, Republicans won big: They picked up governorships in blue states like Maryland, Massachusetts, and Illinois, and they held House seats in competitive districts with embarrassing incumbents like Michael Grimm of New York, who physically threatened a reporter and is under indictment for tax evasion.

But their biggest win by far was taking control of the U.S. Senate. As of this writing, Republicans had already secured 52 Senate seats, thanks to knocking off Democratic incumbents or replacing retiring Democrats in Arkansas, Colorado, Iowa, Montana, North Carolina, South Dakota, and West Virginia. Another GOP pickup is probable in Alaska, and Republican Rep. Bill Cassidy is likely to win the runoff in Louisiana against Sen. Mary Landrieu in December.

This is not good news for the climate. The party that controls the majority and the committee chairmanships controls the agenda. Sen. Mitch McConnell (R-Ky.) will now be the majority leader. McConnell deflects questions about whether he accepts climate science by saying he isn’t a scientist and citing climate-denying conservative pundit George Will. But he is clear about where he stands on fossil fuels, especially coal: He loves them. Attacking President Obama for not sharing his passion for burning carbon was central to McConnell’s reelection campaign this year. If you thought Landrieu, chair of the Senate Energy and Natural Resources Committee, was too pro–fossil fuel, just wait until Republican Lisa Murkowski of Alaska takes the gavel. Leading climate denier James Inhofe of Oklahoma will be taking over the Senate Environment and Public Works Committee, and fellow denier Ted Cruz (R-Texas) will be chairing the Committee on Science and Technology.

The Republicans have two top energy-related demands: stop EPA from regulating CO2 and approve the Keystone XL pipeline.

The EPA is required under the Clean Air Act to regulate greenhouse gases (GHGs) as pollutants. So the agency proposed regulations of CO2 emissions from power plants. This is the centerpiece of what Republicans inaccurately call Obama’s “War on Coal.”

In the House, Republicans have voted to strip the EPA of its authority to regulate GHGs. That measure died in the Senate because of Democratic opposition. Sen. Susan Collins of Maine was the only Senate Republican to vote against it, and even she had voted once previously to revoke the EPA’s GHG regulatory authority. Obama has staked his second-term legacy on reducing GHG emissions, in large part through the EPA power plant regulations. Those regulations are also essential to setting the U.S. on a path to meet its promised emissions reductions under the Copenhagen Accord of 2009. Obama will not let congressional Republicans make him look like a feckless liar to our allies, whose cooperation we need to get a more ambitious climate agreement in the 2015 round of negotiations in Paris. So Obama will make a stand on EPA authority if he must. And before it even comes to that, Senate Democrats will likely throttle any EPA authority repeal with a filibuster.

Keystone is more vulnerable. Many Democrats from fossil fuel-dependent states have called for its approval. As former Republican Speaker of the House Newt Gingrich said on CNN Tuesday night, “This Republican House and Republican Senate will pass the Keystone XL pipeline.” Speaking on Fox News the same night, GOP Dark Overlord Karl Rove said that Republicans would look to pass legislation that could get Democratic votes and cited Keystone XL as an example.

Republicans won’t just pass Keystone approval on its own for Obama to veto. They will continue their strategy of attaching it to unrelated bills, from anodyne energy-efficiency measures to the budget. No one really knows what Obama thinks about Keystone, but it is widely assumed that he was happy to let it go through until activists rose up in protest. Obama would probably like to mollify his base after the midterms by rejecting Keystone, but there’s no guarantee he won’t be willing to trade it away with newly empowered Republicans.

Anticipating exactly this line of dispiriting thinking, 350.org, which has led the national fight against Keystone, issued a statement Tuesday night defensively titled, “Keystone XL No Done Deal.” “We know the Republicans are going to make Keystone a priority, but this isn’t their call,” said May Boeve, executive director of 350.org. “President Obama has the power to reject the Keystone pipeline outright, and do right by his own legacy. We’re gearing up to hold his feet to the fire — and we’re confident that when everything’s said and done, Keystone XL will not be built.”

Republicans know they may need to force Obama’s hand on Keystone precisely because of the pressure he will get from his base to reject it. And they will try to do just that. With control of both houses of Congress, Republicans can pass any bill they want unless Senate Democrats threaten a filibuster. That doesn’t mean Republicans can enact any law they want. Obama can veto their bills, and now his little-used veto pen will be put to work. But Obama can’t simply prevent the GOP from doing anything at all. Some legislation has to get passed just to keep the government running, such as approving a budget and raising the debt ceiling. Ever since Republicans won control of the House in 2010, they have been exploiting those requirements to try to force Obama to sign off on their agenda. Now, with control of the Senate, Republicans will be in a stronger position to demand that Obama give in to or compromise on some of their demands. If he doesn’t, they can cause a government shutdown, or trigger a global financial collapse by breaching the debt ceiling and defaulting on the U.S. national debt.

I know what you’re thinking: “But shutting down the government or defaulting on our debt would be terrible for America!” Don’t be so naive as to mistake congressional Republicans for rational human beings or patriotic Americans. They are so beholden to their base that taking the U.S. economy hostage has become a standard GOP negotiating tactic. Since swing voters frown on such shenanigans, House Majority Leader Kevin McCarthy (Calif.) has said he would like to end the hostile budget fights and get down to governing. But the Republican leadership always wants that, and in the past they’ve always given in to their rowdy backbenchers.

If they can’t get their way through normal legislative means, Republicans might simply try to disable the government by blocking all of President Obama’s nominees until he gives in to a major demand like Keystone approval. They were already doing that by filibustering even moderate, well-qualified nominees until Democrats eliminated the filibuster for executive branch appointees. Now, with a Senate majority, Republicans can block any nominee.

To see what else Senate Republicans have in store for the environment, just look at what their House colleagues have tried to do. Earlier this year, House Republicans passed a series of bills to kneecap federal agencies like the EPA. The details are boring and complicated, but the bottom line is that they would institute a number of requirements to burden or constrict the regulatory process. A typical example is their proposal to require agencies to calculate all the indirect costs of every regulation and always choose the least costly option, regardless of its adverse impact on, say, human health. Another example: In September, they passed a bill that would stop the EPA and Army Corps of Engineers from protecting America’s small streams and wetlands.

Republicans will also try to prevent environmental regulation by refusing to pay for it. In a typical measure, the House GOP’s EPA budget passed in June would have cut funding for the agency by 9 percent. House Republicans have previously voted to defund the Intergovernmental Panel on Climate Change, which compiles reports on climate science, and the U.N. Framework Convention on Climate Change, the body that hosts international climate negotiations. Now the Senate may join them. Just through controlling the House, Republicans have already forced through milder cuts to the EPA budget and blocked environmental regulations. With control of the Senate, Obama will have to cede even more ground.

In fairness to the GOP, elections have consequences and Obama should have to compromise with them. That Republicans lack any actual popular majority — they won because of the rural bias of the Senate and gerrymandering of House districts — is irrelevant. When Republicans claim they have a popular mandate, they are lying, and should be called out for it. But when they say, “We won and we’re going to use our power to enact our agenda,” it’s all in the game.

And so you can expect to see a lot of little bits of bad news for the climate and the broader environment in the budget negotiation process. EPA funding will be cut, presumably by somewhere between the roughly stable funding Obama will likely request and the drastic cuts the House GOP will pass. Programs that especially irk Republicans, like those that promote renewable energy and anything pertaining to smart growth, will fare especially poorly. There will also be spending cuts in other departments with environmental implications, like mass transit and transit-oriented affordable-housing development.

In terms of Senate election results, the worst of it is over. The map of states with Senate seats up in 2016 is a lot more favorable to Democrats, and they will stand a good chance of regaining the majority. But in terms of environmental policy, the worst is yet to come.

*   *   *

Climate change denier Jim Inhofe in line for Senate’s top environmental job (The Guardian)

Obama faces a fight to protect his climate change agenda after midterm results suggest Senate’s top environmental post will fall to Republican stalwart of climate denial

theguardian.com, Thursday 6 November 2014 16.24 GMT

Republican Senator Jim Inhofe is expected to get the Senate’s top environmental job. Photograph: Tom Williams/Getty Images

The Senate’s top environmental job is set to fall to Jim Inhofe, one of the biggest names in US climate denial, but campaigners say Barack Obama will fight to protect his global warming agenda.

Oklahoma Republican Inhofe has been denying the science behind climate change for 20 years – long before it became a cause for the conservative tea party wing. Following midterm elections which saw the Republicans take control of the Senate, he is now expected to become chairman of the Senate environment and public works committee.

However, advocates believe Obama will work to protect his signature power plant rules from Republican attacks, and to live up to his earlier commitments to a global deal to fight climate change.

“We think he sees this as a critically important part of his second term legacy and there is no reason why he should not continue to go forward on this… both domestically and around the world,” Gene Karpinski, president of the League of Conservation Voters, told a press briefing.

The campaigners were less clear, however, how far Obama would be willing to fight to block the Keystone XL pipeline project.

Obama will get a chance to show he is still committed to fighting climate change during a trip to Beijing next week, where the US and Chinese are expected to announce new energy co-operation.

Extracting a pledge from China to cut emissions is hugely important now for Obama, who faces growing pressure from Republicans to demonstrate that other countries beyond the US – especially the high-emissions, rising economies – are acting on climate change.

“It is a domestic political imperative for the president to gain emissions reductions from China and other major emitters as much as it is an international policy goal,” said Paul Bledsoe, a climate change official in the Clinton White House.

“The president is under increasing pressure to gain emissions reductions from China and other major emitters in order to justify US domestic mitigation policy. That is going to be the spin Republicans put on it – that we are wasting our time with domestic emissions reductions because they will be swamped by developing countries’ pollution.”

Obama is going to feel that pressure the most from Congress. With his opponents now in control of both houses, the top slot on the Senate’s environment and public works committee passes from a climate defender, the California Democrat, Barbara Boxer, to Inhofe.

He published a book in 2012 calling global warming a hoax, and has compared the Environmental Protection Agency (EPA) to the Gestapo.

A spokeswoman for Inhofe said his first concern was passing the defence budget, and that he would make no comment on his leadership roles until next week.

But if, as expected, Inhofe becomes the new committee chair next January, he will probably try to dismantle the EPA rules cutting greenhouse gas emissions from power plants – the centrepiece of Obama’s environmental agenda.

Industry lobbyists and campaigners said Inhofe lacked the votes to throw out the power plant rules entirely.

Obama would also veto any such move, said Scott Segal, an energy and coal lobbyist with Bracewell & Giuliani.

“I’m not sure we have the votes to advance those across the finish line particularly if they are vetoed,” Segal told a conference call with reporters. Instead, he said he expected “tailored changes”, which could weaken the rules.

Bledsoe did expect, however, that Obama will sign off on the controversial Keystone XL project early next year.

Republicans have said approving the pipeline, built to pump tar sands crude to Texas Gulf Coast refineries, would be an early order of business.

Obama in his post-election press conference gave no indication what he would decide. But Bledsoe said: “I actually believe the president is likely to approve the pipeline and in the process deny Republicans a politically potent issue.”

From his perch in the Senate, Inhofe is expected to launch multiple investigations into the EPA – including Republican charges that the agency leaned heavily on a campaign group in drafting the proposed new rules.

But as committee chair, Inhofe is unlikely to indulge in quite the same level of theatrics on climate denial, said RL Miller, a California lawyer and founder of the grassroots organising group, Climate Hawks Vote.

“I expect we are going to see less headline-grabbing efforts on the EPA and more of simply throttling their budget,” Miller said. “If he touches climate denial at all he is going to be ridiculed in public and in the media. If he is smart, he is going to be very quiet publicly, and it will be death by a thousand cuts in the kind of budget battles that people like Jon Stewart don’t pay attention to.”

Despite their upbeat postures, Tuesday’s results were a big setback for campaign groups which had invested an unprecedented amount in trying to elect pro-climate candidates to Congress.

The former hedge fund billionaire, Tom Steyer, spent nearly $75m on advertising and organising in only seven races, making him the biggest known single spender in these elections. Only three of his candidates won.

“There is no way to dance around the issue that in too many races we lost good allies,” Michael Brune, the director of the Sierra Club, told a briefing. “We see those people being replaced by people that are against our values.”

But the environmental leaders blamed the poor showing on low turnout in an off-year election – and continued to insist that climate change was becoming a top-tier issue.

They insisted their effort had put climate change on the electoral map – a big shift from 2012 when virtually no candidates would even utter the words climate change.

This time around, Republican candidates were forced to back away from outright climate denial, the campaigners said.

They noted Cory Gardner, the newly elected Republican Senator from Colorado, had appeared in campaign ads with wind turbines, after earlier disparaging climate science. “Climate denial is an endangered species,” Brune said.