Tag archive: Tecnofetichismo

The new astrology (Aeon)

By fetishising mathematical models, economists turned economics into a highly paid pseudoscience

04 April, 2016

Alan Jay Levinovitz is an assistant professor of philosophy and religion at James Madison University in Virginia. His most recent book is The Gluten Lie: And Other Myths About What You Eat (2015).

Edited by Sam Haselby

 

What would make economics a better discipline?

Since the 2008 financial crisis, colleges and universities have faced increased pressure to identify essential disciplines, and cut the rest. In 2009, Washington State University announced it would eliminate the department of theatre and dance, the department of community and rural sociology, and the German major – the same year that the University of Louisiana at Lafayette ended its philosophy major. In 2012, Emory University in Atlanta did away with the visual arts department and its journalism programme. The cutbacks aren’t restricted to the humanities: in 2011, the state of Texas announced it would eliminate nearly half of its public undergraduate physics programmes. Even when there’s no downsizing, faculty salaries have been frozen and departmental budgets have shrunk.

But despite the funding crunch, it’s a bull market for academic economists. According to a 2015 sociological study in the Journal of Economic Perspectives, the median salary of economics teachers in 2012 increased to $103,000 – nearly $30,000 more than that of sociologists. For the top 10 per cent of economists, that figure jumps to $160,000, higher than the next most lucrative academic discipline – engineering. These figures, stress the study’s authors, do not include other sources of income such as consulting fees for banks and hedge funds, which, as many learned from the documentary Inside Job (2010), are often substantial. (Ben Bernanke, a former academic economist and ex-chairman of the Federal Reserve, earns $200,000-$400,000 for a single appearance.)

Unlike engineers and chemists, economists cannot point to concrete objects – cell phones, plastic – to justify the high valuation of their discipline. Nor, in the case of financial economics and macroeconomics, can they point to the predictive power of their theories. Hedge funds employ cutting-edge economists who command princely fees, but routinely underperform index funds. Eight years ago, Warren Buffett made a 10-year, $1 million bet that a portfolio of hedge funds would lose to the S&P 500, and it looks like he’s going to collect. In 1998, a fund that boasted two Nobel Laureates as advisors collapsed, nearly causing a global financial crisis.

The failure of the field to predict the 2008 crisis has also been well-documented. In 2003, for example, only five years before the Great Recession, the Nobel Laureate Robert E Lucas Jr told the American Economic Association that ‘macroeconomics […] has succeeded: its central problem of depression prevention has been solved’. Short-term predictions fare little better – in April 2014, for instance, a survey of 67 economists yielded 100 per cent consensus: interest rates would rise over the next six months. Instead, they fell. A lot.

Nonetheless, surveys indicate that economists see their discipline as ‘the most scientific of the social sciences’. What is the basis of this collective faith, shared by universities, presidents and billionaires? Shouldn’t successful and powerful people be the first to spot the exaggerated worth of a discipline, and the least likely to pay for it?

In the hypothetical worlds of rational markets, where much of economic theory is set, perhaps. But real-world history tells a different story, of mathematical models masquerading as science and a public eager to buy them, mistaking elegant equations for empirical accuracy.

As an extreme example, take the extraordinary success of Evangeline Adams, a turn-of-the-20th-century astrologer whose clients included the president of Prudential Insurance, two presidents of the New York Stock Exchange, the steel magnate Charles M Schwab, and the banker J P Morgan. To understand why titans of finance would consult Adams about the market, it is essential to recall that astrology used to be a technical discipline, requiring reams of astronomical data and mastery of specialised mathematical formulas. ‘An astrologer’ is, in fact, the Oxford English Dictionary’s second definition of ‘mathematician’. For centuries, mapping stars was the job of mathematicians, a job motivated and funded by the widespread belief that star-maps were good guides to earthly affairs. The best astrology required the best astronomy, and the best astronomy was done by mathematicians – exactly the kind of person whose authority might appeal to bankers and financiers.

In fact, when Adams was arrested in 1914 for violating a New York law against astrology, it was mathematics that eventually exonerated her. During the trial, her lawyer Clark L Jordan emphasised mathematics in order to distinguish his client’s practice from superstition, calling astrology ‘a mathematical or exact science’. Adams herself demonstrated this ‘scientific’ method by reading the astrological chart of the judge’s son. The judge was impressed: the plaintiff, he observed, went through a ‘mathematical process to get at her conclusions… I am satisfied that the element of fraud… is absent here.’

The enchanting force of mathematics blinded the judge – and Adams’s prestigious clients – to the fact that astrology relies upon a highly unscientific premise, that the position of stars predicts personality traits and human affairs such as the economy. It is this enchanting force that explains the enduring popularity of financial astrology, even today. The historian Caley Horan at the Massachusetts Institute of Technology described to me how computing technology made financial astrology explode in the 1970s and ’80s. ‘Within the world of finance, there’s always a superstitious, quasi-spiritual trend to find meaning in markets,’ said Horan. ‘Technical analysts at big banks, they’re trying to find patterns in past market behaviour, so it’s not a leap for them to go to astrology.’ In 2000, USA Today quoted Robin Griffiths, the chief technical analyst at HSBC, the world’s third largest bank, saying that ‘most astrology stuff doesn’t check out, but some of it does’.

Ultimately, the problem isn’t with worshipping models of the stars, but rather with uncritical worship of the language used to model them, and nowhere is this more prevalent than in economics. The economist Paul Romer at New York University has recently begun calling attention to an issue he dubs ‘mathiness’ – first in the paper ‘Mathiness in the Theory of Economic Growth’ (2015) and then in a series of blog posts. Romer believes that macroeconomics, plagued by mathiness, is failing to progress as a true science should, and compares debates among economists to those between 16th-century advocates of heliocentrism and geocentrism. Mathematics, he acknowledges, can help economists to clarify their thinking and reasoning. But the ubiquity of mathematical theory in economics also has serious downsides: it creates a high barrier to entry for those who want to participate in the professional dialogue, and makes checking someone’s work excessively laborious. Worst of all, it imbues economic theory with unearned empirical authority.

‘I’ve come to the position that there should be a stronger bias against the use of math,’ Romer explained to me. ‘If somebody came and said: “Look, I have this Earth-changing insight about economics, but the only way I can express it is by making use of the quirks of the Latin language”, we’d say go to hell, unless they could convince us it was really essential. The burden of proof is on them.’

Right now, however, there is widespread bias in favour of using mathematics. The success of math-heavy disciplines such as physics and chemistry has endowed mathematical formulas with decisive authoritative force. Lord Kelvin, the 19th-century mathematical physicist, expressed this quantitative obsession:

When you can measure what you are speaking about and express it in numbers you know something about it; but when you cannot measure it… in numbers, your knowledge is of a meagre and unsatisfactory kind.

The trouble with Kelvin’s statement is that measurement and mathematics do not guarantee the status of science – they guarantee only the semblance of science. When the presumptions or conclusions of a scientific theory are absurd or simply false, the theory ought to be questioned and, eventually, rejected. The discipline of economics, however, is presently so blinkered by the talismanic authority of mathematics that theories are overvalued and go unchecked.

Romer is not the first to elaborate the mathiness critique. In 1886, an article in Science accused economics of misusing the language of the physical sciences to conceal ‘emptiness behind a breastwork of mathematical formulas’. More recently, Deirdre N McCloskey’s The Rhetoric of Economics (1998) and Robert H Nelson’s Economics as Religion (2001) both argued that mathematics in economic theory serves, in McCloskey’s words, primarily to deliver the message ‘Look at how very scientific I am.’

After the Great Recession, the failure of economic science to protect our economy was once again impossible to ignore. In 2009, the Nobel Laureate Paul Krugman tried to explain it in The New York Times with a version of the mathiness diagnosis. ‘As I see it,’ he wrote, ‘the economics profession went astray because economists, as a group, mistook beauty, clad in impressive-looking mathematics, for truth.’ Krugman named economists’ ‘desire… to show off their mathematical prowess’ as the ‘central cause of the profession’s failure’.

The mathiness critique isn’t limited to macroeconomics. In 2014, the Stanford financial economist Paul Pfleiderer published the paper ‘Chameleons: The Misuse of Theoretical Models in Finance and Economics’, which helped to inspire Romer’s understanding of mathiness. Pfleiderer called attention to the prevalence of ‘chameleons’ – economic models ‘with dubious connections to the real world’ that substitute ‘mathematical elegance’ for empirical accuracy. Like Romer, Pfleiderer wants economists to be transparent about this sleight of hand. ‘Modelling,’ he told me, ‘is now elevated to the point where things have validity just because you can come up with a model.’

The notion that an entire culture – not just a few eccentric financiers – could be bewitched by empty, extravagant theories might seem absurd. How could all those people, all that math, be mistaken? This was my own feeling as I began investigating mathiness and the shaky foundations of modern economic science. Yet, as a scholar of Chinese religion, it struck me that I’d seen this kind of mistake before, in ancient Chinese attitudes towards the astral sciences. Back then, governments invested incredible amounts of money in mathematical models of the stars. To evaluate those models, government officials had to rely on a small cadre of experts who actually understood the mathematics – experts riven by ideological differences, who couldn’t even agree on how to test their models. And, of course, despite collective faith that these models would improve the fate of the Chinese people, they did not.

Astral Science in Early Imperial China, a forthcoming book by the historian Daniel P Morgan, shows that in ancient China, as in the Western world, the most valuable type of mathematics was devoted to the realm of divinity – to the sky, in their case (and to the market, in ours). Just as astrology and mathematics were once synonymous in the West, the Chinese spoke of li, the science of calendrics, which early dictionaries also glossed as ‘calculation’, ‘numbers’ and ‘order’. Li models, like macroeconomic theories, were considered essential to good governance. In the classic Book of Documents, the legendary sage king Yao transfers the throne to his successor with mention of a single duty: ‘Yao said: “Oh thou, Shun! The li numbers of heaven rest in thy person.”’

China’s oldest mathematical text invokes astronomy and divine kingship in its very title – The Arithmetical Classic of the Gnomon of the Zhou. The title’s inclusion of ‘Zhou’ recalls the mythic Eden of the Western Zhou dynasty (1045–771 BCE), implying that paradise on Earth can be realised through proper calculation. The book’s introduction to the Pythagorean theorem asserts that ‘the methods used by Yu the Great in governing the world were derived from these numbers’. It was an unquestioned article of faith: the mathematical patterns that govern the stars also govern the world. Faith in a divine, invisible hand, made visible by mathematics. No wonder that a newly discovered text fragment from 200 BCE extolls the virtues of mathematics over the humanities. In it, a student asks his teacher whether he should spend more time learning speech or numbers. His teacher replies: ‘If my good sir cannot fathom both at once, then abandon speech and fathom numbers, [for] numbers can speak, [but] speech cannot number.’

Modern governments, universities and businesses underwrite the production of economic theory with huge amounts of capital. The same was true for li production in ancient China. The emperor – the ‘Son of Heaven’ – spent astronomical sums refining mathematical models of the stars. Take the armillary sphere, such as the two-metre cage of graduated bronze rings in Nanjing, made to represent the celestial sphere and used to visualise data in three dimensions. As Morgan emphasises, the sphere was literally made of money. Bronze being the basis of the currency, governments were smelting cash by the metric ton to pour it into li. A divine, mathematical world-engine, built of cash, sanctifying the powers that be.

The enormous investment in li depended on a huge assumption: that good government, successful rituals and agricultural productivity all depended upon the accuracy of li. But there were, in fact, no practical advantages to the continued refinement of li models. The calendar rounded off decimal points such that the difference between two models, hotly contested in theory, didn’t matter to the final product. The work of selecting auspicious days for imperial ceremonies thus benefited only in appearance from mathematical rigour. And of course the comets, plagues and earthquakes that these ceremonies promised to avert kept on coming. Farmers, for their part, went about business as usual. Occasional governmental efforts to scientifically micromanage farm life in different climes using li ended in famine and mass migration.

Like many economic models today, li models were less important to practical affairs than their creators (and consumers) thought them to be. And, like today, only a few people could understand them. In 101 BCE, Emperor Wudi tasked high-level bureaucrats – including the Great Director of the Stars – with creating a new li that would glorify the beginning of his path to immortality. The bureaucrats refused the task because ‘they couldn’t do the math’, and recommended the emperor outsource it to experts.

The debates of these ancient li experts bear a striking resemblance to those of present-day economists. In 223 CE, a petition was submitted to the emperor asking him to approve tests of a new li model developed by the assistant director of the astronomical office, a man named Han Yi.

At the time of the petition, Han Yi’s model, and its competitor, the so-called Supernal Icon, had already been subjected to three years of ‘reference’, ‘comparison’ and ‘exchange’. Still, no one could agree which one was better. Nor, for that matter, was there any agreement on how they should be tested.

In the end, a live trial involving the prediction of eclipses and heliacal risings was used to settle the debate. With the benefit of hindsight, we can see this trial was seriously flawed. The heliacal rising (first visibility) of planets depends on non-mathematical factors such as eyesight and atmospheric conditions. That’s not to mention the scoring of the trial, which was modelled on archery competitions. Archers scored points for proximity to the bullseye, with no consideration for overall accuracy. The equivalent in economic theory might be to grant a model high points for success in predicting short-term markets, while failing to deduct for missing the Great Recession.

None of this is to say that li models were useless or inherently unscientific. For the most part, li experts were genuine mathematical virtuosos who valued the integrity of their discipline. Despite being based on an inaccurate assumption – that the Earth was at the centre of the cosmos – their models really did work to predict celestial motions. Imperfect though the live trial might have been, it indicates that superior predictive power was a theory’s most important virtue. All of this is consistent with real science, and Chinese astronomy progressed as a science, until it reached the limits imposed by its assumptions.

However, there was no science to the belief that accurate li would improve the outcome of rituals, agriculture or government policy. No science to the Hall of Light, a temple for the emperor built on the model of a magic square. There, by numeric ritual gesture, the Son of Heaven was thought to channel the invisible order of heaven for the prosperity of man. This was quasi-theology, the belief that heavenly patterns – mathematical patterns – could be used to model every event in the natural world, in politics, even the body. Macro- and microcosm were scaled reflections of one another, yin and yang in a unifying, salvific mathematical vision. The expensive gadgets, the personnel, the bureaucracy, the debates, the competition – all of this testified to the divinely authoritative power of mathematics. The result, then as now, was overvaluation of mathematical models based on unscientific exaggerations of their utility.

In ancient China it would have been unfair to blame li experts for the pseudoscientific exploitation of their theories. These men had no way to evaluate the scientific merits of assumptions and theories – ‘science’, in a formalised, post-Enlightenment sense, didn’t really exist. But today it is possible to distinguish, albeit roughly, science from pseudoscience, astronomy from astrology. Hypothetical theories, whether those of economists or conspiracists, aren’t inherently pseudoscientific. Conspiracy theories can be diverting – even instructive – flights of fancy. They become pseudoscience only when promoted from fiction to fact without sufficient evidence.

Romer believes that fellow economists know the truth about their discipline, but don’t want to admit it. ‘If you get people to lower their shield, they’ll tell you it’s a big game they’re playing,’ he told me. ‘They’ll say: “Paul, you may be right, but this makes us look really bad, and it’s going to make it hard for us to recruit young people.”’

Demanding more honesty seems reasonable, but it presumes that economists understand the tenuous relationship between mathematical models and scientific legitimacy. In fact, many assume the connection is obvious – just as in ancient China, the connection between li and the world was taken for granted. When reflecting in 1999 on what makes economics more scientific than the other social sciences, the Harvard economist Richard B Freeman explained that economics ‘attracts stronger students than [political science or sociology], and our courses are more mathematically demanding’. In Lives of the Laureates (2004), Robert E Lucas Jr writes rhapsodically about the importance of mathematics: ‘Economic theory is mathematical analysis. Everything else is just pictures and talk.’ Lucas’s veneration of mathematics leads him to adopt a method that can only be described as a subversion of empirical science:

The construction of theoretical models is our way to bring order to the way we think about the world, but the process necessarily involves ignoring some evidence or alternative theories – setting them aside. That can be hard to do – facts are facts – and sometimes my unconscious mind carries out the abstraction for me: I simply fail to see some of the data or some alternative theory.

Even for those who agree with Romer, conflict of interest still poses a problem. Why would skeptical astronomers question the emperor’s faith in their models? In a phone conversation, Daniel Hausman, a philosopher of economics at the University of Wisconsin, put it bluntly: ‘If you reject the power of theory, you demote economists from their thrones. They don’t want to become like sociologists.’

George F DeMartino, an economist and an ethicist at the University of Denver, frames the issue in economic terms. ‘The interest of the profession is in pursuing its analysis in a language that’s inaccessible to laypeople and even some economists,’ he explained to me. ‘What we’ve done is monopolise this kind of expertise, and we of all people know how that gives us power.’

Every economist I interviewed agreed that conflicts of interest were highly problematic for the scientific integrity of their field – but only tenured ones were willing to go on the record. ‘In economics and finance, if I’m trying to decide whether I’m going to write something favourable or unfavourable to bankers, well, if it’s favourable that might get me a dinner in Manhattan with movers and shakers,’ Pfleiderer said to me. ‘I’ve written articles that wouldn’t curry favour with bankers but I did that when I had tenure.’

Then there’s the additional problem of sunk-cost bias. If you’ve invested in an armillary sphere, it’s painful to admit that it doesn’t perform as advertised. When confronted with their profession’s lack of predictive accuracy, some economists find it difficult to admit the truth. Easier, instead, to double down, like the economist John H Cochrane at the University of Chicago. The problem isn’t too much mathematics, he writes in response to Krugman’s 2009 post-Great-Recession mea culpa for the field, but rather ‘that we don’t have enough math’. Astrology doesn’t work, sure, but only because the armillary sphere isn’t big enough and the equations aren’t good enough.

If overhauling economics depended solely on economists, then mathiness, conflict of interest and sunk-cost bias could easily prove insurmountable. Fortunately, non-experts also participate in the market for economic theory. If people remain enchanted by PhDs and Nobel Prizes awarded for the production of complicated mathematical theories, those theories will remain valuable. If they become disenchanted, the value will drop.

Economists who rationalise their discipline’s value can be convincing, especially with prestige and mathiness on their side. But there’s no reason to keep believing them. The pejorative verb ‘rationalise’ itself warns of mathiness, reminding us that we often deceive each other by making prior convictions, biases and ideological positions look ‘rational’, a word that confuses truth with mathematical reasoning. To be rational is, simply, to think in ratios, like the ratios that govern the geometry of the stars. Yet when mathematical theory is the ultimate arbiter of truth, it becomes difficult to see the difference between science and pseudoscience. The result is people like the judge in Evangeline Adams’s trial, or the Son of Heaven in ancient China, who trust the mathematical exactitude of theories without considering their performance – that is, who confuse math with science, rationality with reality.

There is no longer any excuse for making the same mistake with economic theory. For more than a century, the public has been warned, and the way forward is clear. It’s time to stop wasting our money and recognise the high priests for what they really are: gifted social scientists who excel at producing mathematical explanations of economies, but who fail, like astrologers before them, at prophecy.

Transgenic crops and hydroelectric dams (Estadão); and a response (JC)

Transgenic crops and hydroelectric dams

Recently, a hundred scientists who have received the Nobel Prize in various fields of knowledge signed an appeal to the environmental organisation Greenpeace to abandon its campaign, now many years old, against the use of transgenic crops for food production. Transgenics are products whose genetic code has been altered to give them special characteristics, such as protection against pests, better resistance to periods of drought, higher productivity and others.

José Goldemberg*

15 August 2016 | 05:00

The success of transgenics is evident in many crops, such as soybean production, of which Brazil is an example. However, when transgenic products first came into use, objections were raised, since the genetic modifications could have unforeseeable consequences. Greenpeace became the champion of the campaigns against their use, which was banned in several countries.

The initial objections rested on two kinds of consideration: one of a scientific nature, which was seriously investigated by scientists; and another of a more general nature, based on the “precautionary principle”, which basically tells us that it falls to the proponent of a new product to demonstrate that it has no undesirable or dangerous consequences. The “precautionary principle” has been used, with greater or lesser success, to block the introduction of innovations.

This principle has a strong moral and political component and has been invoked very unevenly over time. It was not invoked, for example, when nuclear energy began to be used to produce electricity some 60 years ago; as a result, hundreds of nuclear reactors were installed in many countries, and some of them caused accidents of major proportions. In the case of climate change originating in human action – the burning of fossil fuels and the release into the atmosphere of gases that warm the planet – the principle was incorporated into the Climate Convention in 1992 and is leading countries to reduce the use of those fuels.

The Nobel laureates’ statement argues that experience has shown that concerns about possible negative consequences of transgenics are unjustified, and that opposing them no longer makes sense.

In a few countries, the “precautionary principle” has also been invoked to obstruct the installation of hydroelectric plants, given that their construction affects riverside populations and has environmental impacts. This is indeed a serious problem in countries with high population density, such as India, whose territory is about one third the size of Brazil’s and whose population is four times larger. Any hydroelectric plant in India affects hundreds of thousands of people. That is not the case in Brazil, much of whose territory lies in the Amazon, where the population is small. Even so, the construction of plants in the Amazon to supply the more populous regions and the large industrial centres of the Southeast has faced serious objections from activist groups.

In the past, hydroelectric plants were planned with reservoirs. When such reservoirs are not built, electricity production varies over the course of the year. To avoid this, artificial lakes are created to store water for the times of year when little rain falls.

Until recently, almost all the electricity used in Brazil was produced by hydroelectric plants with reservoirs, which guaranteed supply throughout the year even when rainfall was low. Since 1990 this practice has been abandoned because of complaints from the populations affected by the flooded areas. Plants came to be built without reservoirs – that is, “run-of-river” – using only the rivers’ flowing water. That is the case of the Jirau, Santo Antônio and Belo Monte plants, whose cost rose sharply relative to the electricity produced: they are sized for the rivers’ maximum flow, which occurs during a few months, and generate far less in the dry months.

In these cases the problem was greatly overstated. In general, for every person affected by the construction of a plant, more than a hundred people benefit from the electricity produced. It so happens that the few thousand people affected live around the plant and have organised to claim compensation (in some cases they are instrumentalised by political groups), whereas the beneficiaries, who number in the millions, live far from the site and are not organised.

It falls to the public authorities to weigh the interests of the population as a whole, comparing the risks and losses suffered by a few with the benefits received by many. This has not been done, and the federal government has lacked the firmness to explain to society where the general interests of the Nation lie.

The same applies to other large public works, such as roads, ports and infrastructure in general. One example is the Rodoanel Mário Covas, the ring road around the city of São Paulo, whose construction faced strong opposition both from those affected by the works and from some environmentalist groups. The firmness of the São Paulo state government and the explanations it provided made the project viable, and today it is considered positive by the great majority: it removes tens of thousands of trucks a day from São Paulo’s urban traffic and reduces the pollution they discharge over the population.

What was learned in that case should be applied to the Amazon hydroelectric plants, which have been contested by some insufficiently informed environmentalist groups. What is called for here is an action like the one the Nobel laureates took on transgenics: accepting hydroelectric plants built to the best technical and environmental standards, including reservoirs, without which they become barely viable, opening the way to the use of other, more polluting energy sources such as coal and petroleum derivatives.

*PRESIDENT OF FAPESP; FORMER PRESIDENT OF CESP


Researcher comments on the article

JC 5485, 19 August 2016

Nagib Nassar, professor emeritus at UnB, challenges the article “Transgenic crops and hydroelectric dams”, from O Estado de S. Paulo, reproduced in the Jornal da Ciência last Tuesday

Read the comment below:

I refer to the article by Professor José Goldemberg, published in the Estadão and given prominence by the Jornal da Ciência.

I disagree with the illustrious scientist, starting with his statement that transgenics are made to protect plants from pests. It is well known that the only transgenic planted for that purpose in Brazil is Bt maize. The professor thus forgot, or chose to overlook, that for this purpose a gene producing an insect-killing toxin is introduced into the plant, and the plant consequently comes to function as an insecticide!

The Bt toxin, just as it kills insects, is also toxic to human beings. The literature frequently cites the high risk, including fatal risk, to the individual. One example of these Bt maize varieties is MON 810: banned for human consumption by the producing country itself and by France, Germany, England and other European countries. Unfortunately, the variety is authorised in Brazil, and those who authorised it had no qualms about turning us into mere guinea pigs! In poor African countries it was rejected even as a gift. Zambia preferred to see its people suffer hunger rather than die poisoned! Besides killing invading insects, the Bt toxin kills useful insects, such as honeybees and other pollinators the plant needs in order to bear fruit.

When this type of transgenic dies at the end of the growing season, its roots leave toxic residues in the soil that kill nitrogen-fixing bacteria, turning the soil into an environment poisoned against the growth of the nitrogen-fixing bacteria that produce fertiliser. The growth of any leguminous crop is thus prevented. The manufacturer of this transgenic spends millions of reais on advertising of every kind, in every form and at every level; the result is the extremely high cost of transgenic seeds, which can reach 130 times the normal price. Small farmers, deceived and misled by the advertising, rush towards a tragic fate when they cannot pay their debts: suicide. There are many known cases in India, which in a single year recorded as many as 180 deaths.

It is fine for a physicist to write about hydroelectric dams, but it is questionable for him to pronounce dogmatically on transgenics. And why did he choose transgenics to associate with hydroelectric dams? Could it be a façade to hide the harm done by transgenics? It reminds me of the manifesto signed by a hundred Nobel laureates in favour of transgenics, hiding behind golden rice. Among those Nobel laureates were physicists, chemists, even literature laureates and, on top of everything, three who were dead!

I also recall a scientist far removed from the field who, ten years ago, went to the Chamber of Deputies with arguments and requests for the release of transgenic soybeans, not on the basis of scientific results, which were never presented and did not even exist, but so as not to harm farmers who were smuggling soybeans.

Nagib Nassar

Professor emeritus at the University of Brasília

Founding president of the FUNAGIB foundation (www.funagib.geneconserve.pro.br)

‘Neuroscience studies have superseded psychoanalysis’, says Brazilian researcher (Folha de S.Paulo)

Juliana Cunha, 18.06.2016

With a 60-year career, 22,794 citations in journals, 60 awards and 710 published articles, Ivan Izquierdo, 78, is the most cited neuroscientist in Latin America and one of its most respected. Born in Argentina, he has lived in Brazil for 40 years and became a naturalised Brazilian citizen in 1981. He now coordinates the Memory Centre of the Brain Institute at PUC-RS.

His research has helped to explain the different types of memory and to dispel the notion that specific brain areas are dedicated exclusively to a single type of activity.

He spoke to Folha during the World Congress on Brain, Behavior and Emotions, held this week in Buenos Aires. Izquierdo was the honouree of this edition of the congress.

In the interview, the scientist talks about the usefulness of traumatic memories and his scepticism about methods that promise to erase memories, and argues that psychoanalysis has been superseded by neuroscience studies and today functions as a mere aesthetic exercise.

Bruno Todeschini
The neuroscientist Ivan Izquierdo during the congress in Buenos Aires

*

Folha – Is it possible to erase memories?
Ivan Izquierdo – It is possible to prevent a memory from expressing itself, that much is true. It is normal, indeed human, to avoid the expression of certain memories. The lack of use of a given memory implies the disuse of that synapse, which gradually atrophies.

Beyond that, it cannot be done. There is no technique for choosing memories and then erasing them, not least because the same information is stored several times in the brain, through a mechanism we call plasticity. Talk of erasing memories is pyrotechnics; it is the stuff of the media and the movies.

You work a great deal with fear memory. Is not being able to erase it a misfortune, or something to celebrate?
Fear memory is what keeps us alive. It is the memory that can be accessed most quickly, and it is the most useful. Every time you go through a threatening situation, the essential piece of information the brain needs to store is that the thing is dangerous. People want to erase fear memories because they are often uncomfortable, but if they were not there, we would put ourselves in bad situations.

Of course this process causes enormous stress. To get around a city, my brain calls up countless fear memories. Between having them and not having them, I prefer to have them; they are what got me this far. But if we can reduce our exposure to risk, so much the better. The problem is often the stimulus, not the fear response.

But some fear memories are paralysing, and can be riskier than the situations they help avoid. How should we deal with them?
Better standing still than dead. The brain acts to preserve us; that is the priority. Of course this mechanism is subject to failure. If we find that the response to a fear memory is exaggerated, we can try to get the brain to give the stimulus a new meaning. It is possible, for example, to expose the patient repeatedly to the stimuli that created the memory, but without the trauma. That dissociates the experience from the fear.

Isn’t that similar to what Freud tried to do with phobias?
Yes, Freud was one of the first to use extinction in the treatment of phobias, although he did not exactly believe in extinction. With extinction the memory persists, it is not erased, but the trauma is no longer there.

But many neuroscientists consider Freud outdated.
Every theory ages. Freud is a major reference and made important contributions. But psychoanalysis has been superseded by neuroscience studies; it belongs to a time when we had no way to run tests and see what was happening in the brain. Today, is someone going to talk to me about the unconscious? Where is it? I am a scientist; I cannot believe in something just because it is interesting.

For me, psychoanalysis today is an aesthetic exercise, not a health treatment. If someone enjoys it, fine, it does no harm, but it is a pity when a person with a real, treatable problem fails to seek medical treatment in the belief that psychoanalysis is an alternative.

And types of analysis other than the Freudian kind?
Cognitive therapy, certainly. There are ways of getting the subject to change their response to a stimulus.

You came to Brazil because of the dictatorship in Argentina. Now Brazil is going through a process that some call a coup; it is a memory in dispute. What do you make of this as a scientist?
I came because of a threat. I do not consider it a coup, but it is a very clever process. Changing a single word gives a whole memory a new meaning. There is indeed a dispute over how this collective memory will be constructed. The left uses the word coup to evoke fear memories in a country that has already been through a coup. As the word is repeated, it creates a powerful effect. We do not yet know how this memory will be consolidated, but the strategy is very clever.

The journalist JULIANA CUNHA travelled at the invitation of the World Congress on Brain, Behavior and Emotions

Curtailing global warming with bioengineering? Iron fertilization won’t work in much of Pacific (Science Daily)

Earth’s own experiments during ice ages showed little effect

Date:
May 16, 2016
Source:
The Earth Institute at Columbia University
Summary:
Over the past half-million years, the equatorial Pacific Ocean has seen five spikes in the amount of iron-laden dust blown in from the continents. In theory, those bursts should have turbo-charged the growth of the ocean’s carbon-capturing algae — algae need iron to grow — but a new study shows that the excess iron had little to no effect.

With the right mix of nutrients, phytoplankton grow quickly, creating blooms visible from space. This image, created from MODIS data, shows a phytoplankton bloom off New Zealand. Credit: Robert Simmon and Jesse Allen/NASA

Over the past half-million years, the equatorial Pacific Ocean has seen five spikes in the amount of iron-laden dust blown in from the continents. In theory, those bursts should have turbo-charged the growth of the ocean’s carbon-capturing algae — algae need iron to grow — but a new study shows that the excess iron had little to no effect.

The results are important today, because as groups search for ways to combat climate change, some are exploring fertilizing the oceans with iron as a solution.

Algae absorb carbon dioxide (CO2), a greenhouse gas that contributes to global warming. Proponents of iron fertilization argue that adding iron to the oceans would fuel the growth of algae, which would absorb more CO2 and sink it to the ocean floor. The most promising ocean regions are those high in nutrients but low in chlorophyll, a sign that algae aren’t as productive as they could be. The Southern Ocean, the North Pacific, and the equatorial Pacific all fit that description. What’s missing, proponents say, is enough iron.

The new study, published this week in the Proceedings of the National Academy of Sciences, adds to growing evidence, however, that iron fertilization might not work in the equatorial Pacific as suggested.

Essentially, earth has already run its own large-scale iron fertilization experiments. During the ice ages, nearly three times more airborne iron blew into the equatorial Pacific than during non-glacial periods, but the new study shows that that increase didn’t affect biological productivity. At some points, as levels of iron-bearing dust increased, productivity actually decreased.

What matters instead in the equatorial Pacific is how iron and other nutrients are stirred up from below by upwelling fueled by ocean circulation, said lead author Gisela Winckler, a geochemist at Columbia University’s Lamont-Doherty Earth Observatory. The study found seven to 100 times more iron was supplied from the equatorial undercurrent than from airborne dust at sites spread across the equatorial Pacific. The authors write that although all of the nutrients might not be used immediately, they are used up over time, so the biological pump is already operating at full efficiency.

“Capturing carbon dioxide is what it’s all about: does iron raining in with airborne dust drive the capture of atmospheric CO2? We found that it doesn’t, at least not in the equatorial Pacific,” Winckler said.

The new findings don’t rule out iron fertilization elsewhere. Winckler and coauthor Robert Anderson of Lamont-Doherty Earth Observatory are involved in ongoing research that is exploring the effects of iron from dust on the Southern Ocean, where airborne dust supplies a larger share of the iron reaching the surface.

The PNAS paper follows another paper Winckler and Anderson coauthored earlier this year in Nature with Lamont graduate student Kassandra Costa looking at the biological response to iron in the equatorial Pacific during just the last glacial maximum, some 20,000 years ago. The new paper expands that study from a snapshot in time to a time series across the past 500,000 years. It confirms that Costa’s finding – that iron fertilization had no effect then – fits a pattern that extends across the past five glacial periods.

To gauge how productive the algae were, the scientists in the PNAS paper used deep-sea sediment cores from three locations in the equatorial Pacific that captured 500,000 years of ocean history. They tested along those cores for barium, a measure of how much organic matter is exported to the sea floor at each point in time, and for opal, a silicate mineral that comes from diatoms. Measures of thorium-232 reflected the amount of dust that blew in from land at each point in time.

“Neither natural variability of iron sources in the past nor purposeful addition of iron to equatorial Pacific surface water today, proposed as a mechanism for mitigating the anthropogenic increase in atmospheric CO2 inventory, would have a significant impact,” the authors concluded.

Past experiments with iron fertilization have had mixed results. The European Iron Fertilization Experiment (EIFEX) in 2004, for example, added iron in the Southern Ocean and was able to produce a burst of diatoms, which captured CO2 in their organic tissue and sank to the ocean floor. However, the German-Indian LOHAFEX project in 2009 experimented in a nearby location in the South Atlantic and found few diatoms. Instead, most of its algae were eaten up by tiny marine creatures, passing CO2 into the food chain rather than sinking it. In the LOHAFEX case, the scientists determined that another nutrient that diatoms need — silicic acid — was lacking.

The Intergovernmental Panel on Climate Change (IPCC) cautiously discusses iron fertilization in its latest report on climate change mitigation. It warns of potential risks, including the impact that higher productivity in one area may have on nutrients needed by marine life downstream, and the potential for expanding low-oxygen zones, increasing acidification of the deep ocean, and increasing nitrous oxide, a greenhouse gas more potent than CO2.

“While it is well recognized that atmospheric dust plays a significant role in the climate system by changing planetary albedo, the study by Winckler et al. convincingly shows that dust and its associated iron content is not a key player in regulating the oceanic sequestration of CO2 in the equatorial Pacific on large spatial and temporal scales,” said Stephanie Kienast, a marine geologist and paleoceanographer at Dalhousie University who was not involved in the study. “The classic paradigm of ocean fertilization by iron during dustier glacials can thus be rejected for the equatorial Pacific, similar to the Northwest Pacific.”


Journal Reference:

  1. Gisela Winckler, Robert F. Anderson, Samuel L. Jaccard, and Franco Marcantonio. Ocean dynamics, not dust, have controlled equatorial Pacific productivity over the past 500,000 years. PNAS, May 16, 2016. DOI: 10.1073/pnas.1600616113

Is there a limit to technological progress? (OESP)

16 May 2016 | 03:00

The idea is becoming popular among politicians and governments that the stagnation of the world economy is due to the fact that the “golden century” of scientific and technological innovation is over. This “golden century” is usually defined as the period from 1870 to 1970, in which the foundations of the technological era we live in were laid.

Indeed, that period saw great advances in our knowledge, ranging from Darwin’s theory of evolution to the discovery of the laws of electromagnetism, which led to large-scale electricity production and to telecommunications, including radio and television, with the resulting benefits for people’s well-being. Other advances, in medicine, such as vaccines and antibiotics, extended average human lifespans. The discovery and use of oil and natural gas fall within this period.

Many argue that in no other one-century period – across the 10,000 years of human history – was so much progress achieved. That view of history, however, can be and has been questioned. In the preceding century, from 1770 to 1870, for example, there were also great advances, arising from the development of coal-fired engines, which made locomotives possible and launched the Industrial Revolution.

Even so, those nostalgic for the past believe that the “golden period” of innovation has been exhausted, and governments consequently adopt purely economic measures today to revive “progress”: subsidies for specific sectors, tax cuts and social policies to reduce inequality, among others, while neglecting support for science and technology.

Some of these policies might help, but they do not touch the fundamental aspect of the problem, which is keeping alive the advance of science and technology, which solved problems in the past and can help solve problems in the future.

To examine the question more closely, it should be remembered that it is not the number of new discoveries that guarantees their relevance. The advance of technology somewhat resembles what sometimes happens with the natural selection of living beings: some species are so well adapted to their environment that they stop “evolving”. That is the case of the beetles that existed at the height of ancient Egypt, 5,000 years ago, and are still there today; or of “fossil” fish species that have evolved little in millions of years.

Other examples are products of modern technology, such as the magnificent DC-3 aircraft, produced more than 50 years ago and still responsible for an important share of world air traffic.

Even in more sophisticated areas, such as information technology, this seems to be happening. The basis of progress in this field was the “miniaturisation” of the electronic chips that carry the transistors. In 1971 the chips produced by Intel (the leading company in the field) had 2,300 transistors on a 12-square-millimetre die. Today’s chips are only slightly larger but carry 5 billion transistors. That is what made possible personal computers, mobile phones and countless other products. And it is why fixed-line telephony is being abandoned and communication via Skype is practically free and has revolutionised the world of communications.

There are now indications that this miniaturisation has reached its limits, which causes a certain gloom among the “high priests” of the sector. That is a mistaken view. The level of success has been such that further progress in this direction is genuinely unnecessary, which is what happened with countless living beings in the past.

What appears to be the solution to the problems of long-term economic growth is the advance of technology in other areas that have not received the necessary attention: new materials, artificial intelligence, industrial robots, genetic engineering, disease prevention and, above all, understanding the human brain, the most sophisticated product of the evolution of life on Earth.

Understanding how a combination of atoms and molecules can generate an organ as creative as the brain, capable of consciousness and of the creativity to compose symphonies like Beethoven’s – and, at the same time, of promoting the extermination of millions of human beings – will probably be the most extraordinary advance Homo sapiens can achieve.

Advances in these areas could create a wave of innovation and material progress greater in quantity and quality than what was produced in the “golden century”. What is more, we now face a new global problem, environmental degradation, resulting in part from the very success of 20th-century technological advances. The task of reducing the emissions of the gases that cause global warming (resulting from the burning of fossil fuels) will, by itself, be herculean.

Before that, and on a much more pedestrian plane, the advances being made in the efficiency of natural-resource use are extraordinary and have not received the credit and recognition they deserve.

To give just one example, in 1950 Americans spent, on average, 30% of their income on food. By 2013 that share had fallen to 10%. Spending on energy has also fallen, thanks to improvements in the efficiency of cars and of other uses such as lighting and heating, which, incidentally, explains why the price of a barrel of oil fell from US$150 to less than US$30. There is simply too much oil in the world, just as there is idle capacity in steel and cement.

One example of a country following this path is Japan, whose economy is not growing much, but whose population enjoys a high standard of living and continues to benefit gradually from the advances of modern technology.

*José Goldemberg is professor emeritus at the University of São Paulo (USP) and president of the Fundação de Amparo à Pesquisa do Estado de São Paulo (Fapesp)

If The UAE Builds A Mountain Will It Actually Bring More Rain? (Vocativ)

You’re not the only one who thinks constructing a rain-inducing mountain in the desert is a bonkers idea

May 03, 2016 at 6:22 PM ET

Photo Illustration: R. A. Di ISO

The United Arab Emirates wants to build a mountain so the nation can control the weather—but some experts are skeptical about the effectiveness of this project, which may sound more like a James Bond villain’s diabolical plan than a solution to drought.

The actual construction of a mountain isn’t beyond the engineering prowess of the UAE. The small country on the Arabian Peninsula has pulled off grandiose environmental projects before, like the artificial Palm Islands off the coast of Dubai and an indoor ski hill in the Mall of the Emirates. But the scientific purpose of the mountain is questionable.

The UAE’s National Center for Meteorology and Seismology (NCMS) is currently collaborating with the U.S.-based University Corporation for Atmospheric Research (UCAR) for the first planning phase of the ambitious project, according to Arabian Business. The UAE government gave the two groups $400,000 in funding to determine whether they can bring more rain to the region by constructing a mountain that will foster better cloud-seeding.

Last week the NCMS revealed that the UAE spent $588,000 on cloud-seeding in 2015. Throughout the year, 186 flights dispersed potassium chloride, sodium chloride and magnesium into clouds—a process that can trigger precipitation. Now, the UAE is hoping they can enhance the chemical process by forcing air up around the artificial mountain, creating clouds that can be seeded more easily and efficiently.

“What we are looking at is basically evaluating the effects on weather through the type of mountain, how high it should be and how the slopes should be,” NCAR lead researcher Roelof Bruintjes told Arabian Business. “We will have a report of the first phase this summer as an initial step.”

But some scientists don’t expect NCAR’s research will lead to a rain-inducing alp. “I really doubt that it would work,” Raymond Pierrehumbert, a professor of physics at the University of Oxford, told Vocativ. “You’d need to build a long ridge, not just a cone, otherwise the air would just go around. Even if you could do that, mountains cause local enhanced rain on the upslope side, but not much persistent cloud downwind, and if you need cloud seeding to get even the upslope rain, it’s really unlikely to work as there is very little evidence that cloud seeding produces much rainfall.”

Pierrehumbert, who specializes in geophysics and climate change, believes the regional environment would make the project especially difficult. “UAE is a desert because of the wind patterns arising from global atmospheric circulations, and any mountain they build is not going to alter those,” he said. 

Pierrehumbert concedes that NCAR is a respectable organization that will be able to use the “small amount of money to research the problem.” He thinks some good scientific study will come of the effort—perhaps helping to determine why a hot, humid area bordered by the ocean receives so little rainfall.

But he believes the minimal sum should go into another project: “They’d be way better off putting the money into solar-powered desalination plants.”

If the project doesn’t work out, at least wealthy Emirates have a 125,000-square-foot indoor snow park to look forward to in 2018.

God of Thunder (NPR)

October 17, 201411:09 AM ET

In 1904, Charles Hatfield claimed he could turn around the Southern California drought. Little did he know, he was going to get much, much more water than he bargained for.

GLYNN WASHINGTON, HOST:

From PRX and NPR, welcome back to SNAP JUDGMENT the Presto episode. Today we’re calling on mysterious forces and we’re going to strap on the SNAP JUDGMENT time machine. Our own Eliza Smith takes the controls and spins the dial back 100 years into the past.

ELIZA SMITH, BYLINE: California, 1904. In the fields, oranges dry in their rinds. In the ‘burbs, lawns yellow. Poppies wilt on the hillsides. Meanwhile, Charles Hatfield sits at a desk in his father’s Los Angeles sewing machine business. His dad wants him to take over someday, but Charlie doesn’t want to spend the rest of his life knocking on doors and convincing housewives to buy his bobbins and thread. Charlie doesn’t look like the kind of guy who changes the world. He’s impossibly thin with a vanishing patch of mousy hair. He always wears the same drab tweed suit. But he thinks to himself just maybe he can quench the Southland’s thirst. So when he punches out his timecard, he doesn’t go home for dinner. Instead, he sneaks off to the Los Angeles Public Library and pores over stacks of books. He reads about shamans who believed that fumes from a pyre of herbs and alcohols could force rain from the sky. He reads modern texts too, about the pseudoscience of pluviculture – rainmaking, the theory that explosives and pyrotechnics could crack the clouds. Charlie conducts his first weather experiment on his family ranch, just northeast of Los Angeles in the city of Pasadena. One night he pulls his youngest brother, Paul, out of bed to keep watch with a shotgun as he climbs atop a windmill, pours a cocktail of chemicals into a shallow pan and then waits.

He doesn’t have a burner or a fan or some hybrid, no – he just waits for the chemicals to evaporate into the clouds. Paul slumped into a slumber long ago and is now leaning against the foundation of the windmill, when the first droplet hits Charlie’s cheek. Then another. And another.

Charlie pulls out his rain gauge and measures 0.65 inches. It’s enough to convince him he can make rain.

That’s right, Charlie has the power. Word spreads in local papers and one by one, small towns – Hemet, Volta, Gustine, Newman, Crows Landing, Patterson – come to him begging for rain. And wherever Charlie goes, rain seems to follow. After he gives their town seven more inches of water than his contract stipulated, the Hemet News raves that Mr. Hatfield is proving beyond doubt that rain can be produced.

Within weeks he’s signing contracts with towns from the Pacific Coast to the Mississippi. Of course, there are doubters who claim that he tracks the weather, who claim he’s a fool chasing his luck.

But then Charlie gets an invitation to prove himself. San Diego, a major city, is starting to talk water rations, and they call on him. Of course, most of the city councilmen are dubious of Charlie’s charlatan claims. But still, cows are keeling over in their pastures and farmers are worrying over dying crops. It won’t hurt to hire him, they reason: if Charlie Hatfield can fill San Diego’s biggest reservoir, behind Morena Dam, with 10 billion gallons of water, he’ll earn himself $10,000. If he can’t, well then he’ll just walk away and the city will laugh the whole thing off.

One councilman jokes…

UNIDENTIFIED MAN #1: It’s heads – the city wins. Tails – Hatfield loses.

SMITH: Charlie and Paul set up camp in the remote hills surrounding the Morena Reservoir. This time they work for weeks building several towers. This is to be Charlie’s biggest rain yet. When visitors come to observe his experiments, Charlie turns his back to them, hiding his notebooks and chemicals and Paul fingers the trigger on his trusty rifle. And soon enough it’s pouring. Winds reach record speeds of over 60 miles per hour. But that isn’t good enough – Charlie needs the legitimacy a satisfied San Diego can grant him. And so he works non-stop dodging lightning bolts, relishing thunderclaps. He doesn’t care that he’s soaked to the bone – he can wield weather. The water downs power lines, floods streets, rips up rail tracks.

A Mission Valley man who had to be rescued by a row boat as he clung to a scrap of lumber wraps himself in a towel and shivers as he suggests…

UNIDENTIFIED MAN #2: Let’s pay Hatfield $100,000 to quit.

SMITH: But Charlie isn’t quitting. The rain comes down harder and harder. Dams and reservoirs across the county explode and the flood devastates every farm, every house in its wake. One winemaker is surfacing from the protection of his cellar when he spies a wave twice the height of a telephone pole tearing down his street. He grabs his wife and they run as fast as they can, only to turn and watch their house washed downstream.

And yet, Charlie smiles as he surveys his success. The Morena Reservoir is full. He grabs Paul and the two leave their camp to march the 50-odd miles to City Hall. He expects the indebted populace to kiss his mud-covered shoes. Instead, he’s met with glares and threats. By the time Charlie and Paul reach San Diego’s city center, they’ve stopped answering to the name Hatfield. They call themselves Benson to avoid bodily harm.

Still, when he stands before the city councilmen, Charlie declares his operations successful and demands his payment. The men glower at him.

San Diego is in ruins and worst of all – they’ve got blood on their hands. The flood drowned more than 50 people. It also destroyed homes, farms, telephone lines, railroads, streets, highways and bridges. San Diegans file millions of dollars in claims but Charlie doesn’t budge. He folds his arms across his chest, holds his head high and proclaims, the time is coming when drought will overtake this portion of the state. It will be then that you call for my services again.

So the city councilmen tell Charlie that if he’s sure he made it rain, they’ll give him his $10,000 – he’ll just have to take full responsibility for the flood. Charlie grits his teeth and tells them, it was coincidence. It rained because Mother Nature made it so. I am no rainmaker.

And then Charlie disappears. He goes on selling sewing machines and keeping quiet.

WASHINGTON: I’ll tell you what, California these days could use a little Charlie Hatfield. Big thanks to Eliza Smith for sharing that story and thanks as well to Leon Morimoto for sound design. Mischief managed – you’ve just gotten to the other side by means of other ways.

If you missed any part of this show, no need for a rampage – head on over to snapjudgment.org. There you’ll find the award-winning podcast – Mark, what award did we win? Movies, pictures, stuff. Amazing stories await. Get in on the conversation. SNAP JUDGMENT’s on Facebook, Twitter @snapjudgment.

Did you ever wind up in the slithering sitting room when you’re supposed to be in Gryffindor’s parlor? Well, me neither, but I’m sure it’s nothing like wandering the halls of the Corporation for Public Broadcasting. Completely different, but many thanks to them. PRX, Public Radio Exchange, hosts a similar annual Quidditch championships but instead of brooms they ride radios. Not quite the same visual effect, but it’s good clean fun all the same – prx.org.

WBEZ in Chicago has tricks up their sleeve and you may have reckoned that this is not the news. No way is this the news. In fact, if you’d just thrown that book with Voldemort trapped in it, thrown it in the fire, been done with the nonsense – and you would still not be as far away from the news as this is. But this is NPR.

Hit Steyerl | Politics of Post-Representation (Dis Blog)

[Accessed Nov 23, 2015]

In conversation with Marvin Jordan

From the militarization of social media to the corporatization of the art world, Hito Steyerl’s writings represent some of the most influential bodies of work in contemporary cultural criticism today. As a documentary filmmaker, she has created multiple works addressing the widespread proliferation of images in contemporary media, deepening her engagement with the technological conditions of globalization. Steyerl’s work has been exhibited in numerous solo and group exhibitions including documenta 12, Taipei Biennial 2010, and 7th Shanghai Biennial. She currently teaches New Media Art at Berlin University of the Arts.

Hito Steyerl, How Not To Be Seen: A Fucking Didactic Educational .MOV File (2013)

Marvin Jordan I’d like to open our dialogue by acknowledging the central theme for which your work is well known — broadly speaking, the socio-technological conditions of visual culture — and move toward specific concepts that underlie your research (representation, identification, the relationship between art and capital, etc). In your essay titled “Is a Museum a Factory?” you describe a kind of ‘political economy’ of seeing that is structured in contemporary art spaces, and you emphasize that a social imbalance — an exploitation of affective labor — takes place between the projection of cinematic art and its audience. This analysis leads you to coin the term “post-representational” in service of experimenting with new modes of politics and aesthetics. What are the shortcomings of thinking in “representational” terms today, and what can we hope to gain from transitioning to a “post-representational” paradigm of art practices, if we haven’t arrived there already?

Hito Steyerl Let me give you one example. A while ago I met an extremely interesting developer in Holland. He was working on smart phone camera technology. A representational mode of thinking photography is: there is something out there and it will be represented by means of optical technology ideally via indexical link. But the technology for the phone camera is quite different. As the lenses are tiny and basically crap, about half of the data captured by the sensor are noise. The trick is to create the algorithm to clean the picture from the noise, or rather to define the picture from within noise. But how does the camera know this? Very simple. It scans all other pictures stored on the phone or on your social media networks and sifts through your contacts. It looks through the pictures you already made, or those that are networked to you and tries to match faces and shapes. In short: it creates the picture based on earlier pictures, on your/its memory. It does not only know what you saw but also what you might like to see based on your previous choices. In other words, it speculates on your preferences and offers an interpretation of data based on affinities to other data. The link to the thing in front of the lens is still there, but there are also links to past pictures that help create the picture. You don’t really photograph the present, as the past is woven into it.
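
A minimal sketch of the idea described above, assuming nothing about the actual Dutch prototype: the saved photo is estimated by blending the noisy capture with a prior built from images already stored on the device, so the past is literally woven into the present picture. The function name, the simple averaging prior and the blending weight are illustrative choices, not the real pipeline.

```python
import numpy as np

def reconstruct(noisy_capture, stored_images, alpha=0.6):
    """Toy prior-guided 'denoising': mix the noisy capture with the average
    of previously stored images. alpha is an arbitrary weight on the past."""
    prior = np.mean(np.stack(stored_images), axis=0)   # the phone's 'memory'
    return alpha * prior + (1 - alpha) * noisy_capture

# Stand-ins for real images: small grayscale arrays
rng = np.random.default_rng(0)
past = [rng.random((4, 4)) for _ in range(5)]          # pictures already taken
capture = rng.random((4, 4))                           # mostly noise, per the interview
print(reconstruct(capture, past))
```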

The result might be a picture that never existed in reality, but that the phone thinks you might like to see. It is a bet, a gamble, some combination between repeating those things you have already seen and coming up with new versions of these, a mixture of conservatism and fabulation. The paradigm of representation stands to the present condition as traditional lens-based photography does to an algorithmic, networked photography that works with probabilities and bets on inertia. Consequently, it makes seeing unforeseen things more difficult. The noise will increase and random interpretation too. We might think that the phone sees what we want, but actually we will see what the phone thinks it knows about us. A complicated relationship — like a very neurotic marriage. I haven’t even mentioned external interference into what your phone is recording. All sorts of applications are able to remotely shut your camera on or off: companies, governments, the military. It could be disabled for whole regions. One could, for example, disable recording functions close to military installations, or conversely, live broadcast whatever you are up to. Similarly, the phone might be programmed to auto-pixellate secret or sexual content. It might be fitted with a so-called dick algorithm to screen out NSFW content or auto-modify pubic hair, stretch or omit bodies, exchange or collage context or insert AR advertisement and pop up windows or live feeds. Now lets apply this shift to the question of representative politics or democracy. The representational paradigm assumes that you vote for someone who will represent you. Thus the interests of the population will be proportionally represented. But current democracies work rather like smartphone photography by algorithmically clearing the noise and boosting some data over other. It is a system in which the unforeseen has a hard time happening because it is not yet in the database. It is about what to define as noise — something Jacques Ranciere has defined as the crucial act in separating political subjects from domestic slaves, women and workers. Now this act is hardwired into technology, but instead of the traditional division of people and rabble, the results are post-representative militias, brands, customer loyalty schemes, open source insurgents and tumblrs.

Additionally, Ranciere’s democratic solution: there is no noise, it is all speech. Everyone has to be seen and heard, and has to be realized online as some sort of meta noise in which everyone is monologuing incessantly, and no one is listening. Aesthetically, one might describe this condition as opacity in broad daylight: you could see anything, but what exactly and why is quite unclear. There are a lot of brightly lit glossy surfaces, yet they don’t reveal anything but themselves as surface. Whatever there is — it’s all there to see but in the form of an incomprehensible, Kafkaesque glossiness, written in extraterrestrial code, perhaps subject to secret legislation. It certainly expresses something: a format, a protocol or executive order, but effectively obfuscates its meaning. This is a far cry from a situation in which something—an image, a person, a notion — stood in for another and presumably acted in its interest. Today it stands in, but its relation to whatever it stands in for is cryptic, shiny, unstable; the link flickers on and off. Art could relish in this shiny instability — it does already. It could also be less baffled and mesmerised and see it as what the gloss mostly is about – the not-so-discreet consumer friendly veneer of new and old oligarchies, and plutotechnocracies.

MJ In your insightful essay, “The Spam of the Earth: Withdrawal from Representation”, you extend your critique of representation by focusing on an irreducible excess at the core of image spam, a residue of unattainability, or the “dark matter” of which it’s composed. It seems as though an unintelligible horizon circumscribes image spam by image spam itself, a force of un-identifiability, which you detect by saying that it is “an accurate portrayal of what humanity is actually not… a negative image.” Do you think this vacuous core of image spam — a distinctly negative property — serves as an adequate ground for a general theory of representation today? How do you see today’s visual culture affecting people’s behavior toward identification with images?

HS Think of Twitter bots for example. Bots are entities supposed to be mistaken for humans on social media web sites. But they have become formidable political armies too — in brilliant examples of how representative politics have mutated nowadays. Bot armies distort discussion on twitter hashtags by spamming them with advertisement, tourist pictures or whatever. Bot armies have been active in Mexico, Syria, Russia and Turkey, where most political parties, above all the ruling AKP are said to control 18,000 fake twitter accounts using photos of Robbie Williams, Megan Fox and gay porn stars. A recent article revealed that, “in order to appear authentic, the accounts don’t just tweet out AKP hashtags; they also quote philosophers such as Thomas Hobbes and movies like PS: I Love You.” It is ever more difficult to identify bots – partly because humans are being paid to enter CAPTCHAs on their behalf (1,000 CAPTCHAs equals 50 USD cents). So what is a bot army? And how and whom does it represent if anyone? Who is an AKP bot that wears the face of a gay porn star and quotes Hobbes’ Leviathan — extolling the need of transforming the rule of militias into statehood in order to escape the war of everyone against everyone else? Bot armies are a contemporary vox pop, the voice of the people, the voice of what the people are today. It can be a Facebook militia, your low cost personalized mob, your digital mercenaries. Imagine your photo is being used for one of these bots. It is the moment when your picture becomes quite autonomous, active, even militant. Bot armies are celebrity militias, wildly jump cutting between glamour, sectarianism, porn, corruption and Post-Baath Party ideology. Think of the meaning of the word “affirmative action” after twitter bots and like farms! What does it represent?

MJ You have provided a compelling account of the depersonalization of the status of the image: a new process of de-identification that favors materialist participation in the circulation of images today.  Within the contemporary technological landscape, you write that “if identification is to go anywhere, it has to be with this material aspect of the image, with the image as thing, not as representation. And then it perhaps ceases to be identification, and instead becomes participation.” How does this shift from personal identification to material circulation — that is, to cybernetic participation — affect your notion of representation? If an image is merely “a thing like you and me,” does this amount to saying that identity is no more, no less than a .jpeg file?

HS Social media makes the shift from representation to participation very clear: people participate in the launch and life span of images, and indeed their life span, spread and potential is defined by participation. Think of the image not as surface but as all the tiny light impulses running through fiber at any one point in time. Some images will look like deep sea swarms, some like cities from space, some are utter darkness. We could see the energy imparted to images by capital or quantified participation very literally, we could probably measure its popular energy in lumen. By partaking in circulation, people participate in this energy and create it.
What this means is a different question though — by now this type of circulation seems a little like the petting zoo of plutotechnocracies. It’s where kids are allowed to make a mess — but just a little one — and if anyone organizes serious dissent, the seemingly anarchic sphere of circulation quickly reveals itself as a pedantic police apparatus aggregating relational metadata. It turns out to be an almost Althusserian ISA (Internet State Apparatus), hardwired behind a surface of ‘kawaii’ apps and online malls. As to identity, Heartbleed and more deliberate governmental hacking exploits certainly showed that identity goes far beyond a relationship with images: it entails a set of private keys, passwords, etc., that can be expropriated and detourned. More generally, identity is the name of the battlefield over your code — be it genetic, informational, pictorial. It is also an option that might provide protection if you fall beyond any sort of modernist infrastructure. It might offer sustenance, food banks, medical service, where common services either fail or don’t exist. If the Hezbollah paradigm is so successful it is because it provides an infrastructure to go with the Twitter handle, and as long as there is no alternative many people need this kind of container for material survival. Huge religious and quasi-religious structures have sprung up in recent decades to take up the tasks abandoned by states, providing protection and survival in a reversal of the move described in Leviathan. Identity happens when the Leviathan falls apart and nothing is left of the commons but a set of policed relational metadata, Emoji and hijacked hashtags. This is the reason why the gay AKP pornstar bots are desperately quoting Hobbes’ book: they are already sick of the war of Robbie Williams (Israel Defense Forces) against Robbie Williams (Electronic Syrian Army) against Robbie Williams (PRI/AAP) and are hoping for just any entity to organize day care and affordable dentistry.

But beyond all the portentous vocabulary relating to identity, I believe that a widespread standard of the contemporary condition is exhaustion. The interesting thing about Heartbleed — to come back to one of the current threats to identity (as privacy) — is that it is produced by exhaustion and not effort. It is a bug introduced by open source developers not being paid for something that is used by software giants worldwide. Nor were there apparently enough resources to audit the code in the big corporations that just copy-pasted it into their applications and passed on the bug, fully relying on free volunteer labour to produce their proprietary products. Heartbleed records exhaustion by trying to stay true to an ethics of commonality and exchange that has long since been exploited and privatized. So, that exhaustion found its way back into systems. For many people and for many reasons — and on many levels — identity is just that: shared exhaustion.

MJ This is an opportune moment to address the labor conditions of social media practice in the context of the art space. You write that “an art space is a factory, which is simultaneously a supermarket — a casino and a place of worship whose reproductive work is performed by cleaning ladies and cellphone-video bloggers alike.” Incidentally, DIS launched a website called ArtSelfie just over a year ago, which encourages social media users to participate quite literally in “cellphone-video blogging” by aggregating their Instagram #artselfies in a separately integrated web archive. Given our uncanny coincidence, how can we grasp the relationship between social media blogging and the possibility of participatory co-curating on equal terms? Is there an irreconcilable antagonism between exploited affective labor and a genuinely networked art practice? Or can we move beyond — to use a phrase of yours — a museum crowd “struggling between passivity and overstimulation?”

HS I wrote this in relation to something my friend Carles Guerra noticed already around early 2009; big museums like the Tate were actively expanding their online marketing tools, encouraging people to basically build the museum experience for them by sharing, etc. It was clear to us that audience participation on this level was a tool of extraction and outsourcing, following a logic that has turned online consumers into involuntary data providers overall. Like in the previous example – Heartbleed – the paradigm of participation and generous contribution towards a commons tilts quickly into an asymmetrical relation, where only a minority of participants benefits from everyone’s input, the digital 1 percent reaping the attention value generated by the 99 percent rest.

Brian Kuan Wood put it very beautifully recently: Love is debt, an economy of love and sharing is what you end up with when left to your own devices. However, an economy based on love ends up being an economy of exhaustion – after all, love is utterly exhausting — of deregulation, extraction and lawlessness. And I don’t even want to mention likes, notes and shares, which are the child-friendly, sanitized versions of affect as currency.
All is fair in love and war. It doesn’t mean that love isn’t true or passionate, but just that love is usually uneven, utterly unfair and asymmetric, just as capital tends to be distributed nowadays. It would be great to have a little bit less love, a little more infrastructure.

MJ Long before Edward Snowden’s NSA revelations reshaped our discussions of mass surveillance, you wrote that “social media and cell-phone cameras have created a zone of mutual mass-surveillance, which adds to the ubiquitous urban networks of control,” underscoring the voluntary, localized, and bottom-up mutuality intrinsic to contemporary systems of control. You go on to say that “hegemony is increasingly internalized, along with the pressure to conform and perform, as is the pressure to represent and be represented.” But now mass government surveillance is common knowledge on a global scale — ‘externalized’, if you will — while social media representation practices remain as revealing as they were before. Do these recent developments, as well as the lack of change in social media behavior, contradict or reinforce your previous statements? In other words, how do you react to the irony that, in the same year as the unprecedented NSA revelations, “selfie” was deemed word of the year by Oxford Dictionaries?

HS Haha — good question!

Essentially I think it makes sense to compare our moment with the end of the twenties in the Soviet Union, when euphoria about electrification, NEP (New Economic Policy), and montage gives way to bureaucracy, secret directives and paranoia. Today this corresponds to the sheer exhilaration of having a World Wide Web being replaced by the drudgery of corporate apps, waterboarding, and “normcore”. I am not trying to say that Stalinism might happen again – this would be plain silly – but trying to acknowledge emerging authoritarian paradigms, some forms of algorithmic consensual governance techniques developed within neoliberal authoritarianism, heavily relying on conformism, “family” values and positive feedback, and backed up by all-out torture and secret legislation if necessary. On the other hand things are also falling apart into uncontrollable love. One also has to remember that people did really love Stalin. People love algorithmic governance too, if it comes with watching unlimited amounts of Game of Thrones. But anyone slightly interested in digital politics and technology is by now acquiring at least basic skills in disappearance and subterfuge.

Hito Steyerl, How Not To Be Seen: A Fucking Didactic Educational .MOV File (2013)

MJ In “Politics of Art: Contemporary Art and the Transition to Post-Democracy,” you point out that the contemporary art industry “sustains itself on the time and energy of unpaid interns and self-exploiting actors on pretty much every level and in almost every function,” while maintaining that “we have to face up to the fact that there is no automatically available road to resistance and organization for artistic labor.” Bourdieu theorized qualitatively different dynamics in the composition of cultural capital vs. that of economic capital, arguing that the former is constituted by the struggle for distinction, whose value is irreducible to financial compensation. This basically translates to: everyone wants a piece of the art-historical pie, and is willing to go through economic self-humiliation in the process. If striving for distinction is antithetical to solidarity, do you see a possibility of reconciling it with collective political empowerment on behalf of those economically exploited by the contemporary art industry?

HS In Art and Money, William Goetzmann, Luc Renneboog, and Christophe Spaenjers conclude that income inequality correlates to art prices. The bigger the difference between top income and no income, the higher prices are paid for some art works. This means that the art market will benefit not only if fewer people have more money but also if more people have no money. This also means that increasing the amount of zero incomes is likely, especially under current circumstances, to raise the price of some art works. The poorer many people are (and the richer a few), the better the art market does; the more unpaid interns, the more expensive the art. But the art market itself may be following a similar pattern of inequality, basically creating a divide between the 0.01 percent, if not less, of artworks that are able to concentrate the bulk of sales and the 99.99 percent rest. There is no short-term solution for this feedback loop, except of course not to accept this situation, individually or preferably collectively on all levels of the industry. This also means from the point of view of employers. There is a long-term benefit to this, not only to interns and artists but to everyone. Cultural industries that are too exclusively profit-oriented lose their appeal. If you want exciting things to happen you need a bunch of young and inspiring people creating a dynamic by doing risky, messy and confusing things. If they cannot afford to do this, they will do it somewhere else eventually. There needs to be space and resources for experimentation, even failure, otherwise things go stale. If these people move on to more accommodating sectors the art sector will mentally shut down even more and become somewhat North Korean in its outlook — just like contemporary blockbuster CGI industries. Let me explain: there is a managerial sleekness and awe-inspiring military perfection to every pixel in these productions, like in North Korean pixel parades, where thousands of soldiers wave color posters to form ever new pixel patterns. The result is quite something but this something is definitely neither inspiring nor exciting. If the art world keeps going down the way of raising art prices via starvation of its workers – and there is no reason to believe it will not continue to do this – it will become the Disney version of Kim Jong Un’s pixel parades. 12K starving interns waving pixels for giant CGI renderings of Marina Abramovic! Imagine the price it will fetch!

No escaping the Blue Marble (The Conversation)

August 20, 2015, 6:46pm EDT

The Earth seen from Apollo, a photo now known as the “Blue Marble”. NASA

It is often said that the first full image of the Earth, “Blue Marble”, taken by the Apollo 17 space mission in December 1972, revealed Earth to be precious, fragile and protected only by a wafer-thin atmospheric layer. It reinforced the imperative for better stewardship of our “only home”.

But there was another way of seeing the Earth revealed by those photographs. For some the image showed the Earth as a total object, a knowable system, and validated the belief that the planet is there to be used for our own ends.

In this way, the “Blue Marble” image was not a break from technological thinking but its affirmation. A few years earlier, reflecting on the spiritual consequences of space flight, the theologian Paul Tillich wrote of how the possibility of looking down at the Earth gives rise to “a kind of estrangement between man and earth” so that the Earth is seen as a totally calculable material body.

For some, by objectifying the planet this way the Apollo 17 photograph legitimised the Earth as a domain of technological manipulation, a domain from which any unknowable and unanalysable element has been banished. It prompts the idea that the Earth as a whole could be subject to regulation.

This metaphysical possibility is today a physical reality in work now being carried out on geoengineering – technologies aimed at deliberate, large-scale intervention in the climate system designed to counter global warming or offset some of its effects.

While some proposed schemes are modest and relatively benign, the more ambitious ones – each now with a substantial scientific-commercial constituency – would see humanity mobilising its technological power to seize control of the climate system. And because the climate system cannot be separated from the rest of the Earth System, that means regulating the planet, probably in perpetuity.

Dreams of escape

Geoengineering is often referred to as Plan B, one we should be ready to deploy because Plan A, cutting global greenhouse gas emissions, seems unlikely to be implemented in time. Others are now working on what might be called Plan C. It was announced last year in The Times:

British scientists and architects are working on plans for a “living spaceship” like an interstellar Noah’s Ark that will launch in 100 years’ time to carry humans away from a dying Earth.

This version of Plan C is known as Project Persephone, which is curious as Persephone in Greek mythology was the queen of the dead. The project’s goal is to build “prototype exovivaria – closed ecosystems inside satellites, to be maintained from Earth telebotically, and democratically governed by a global community.”

NASA and DARPA, the US Defense Department’s advanced technologies agency, are also developing a “worldship” designed to take a multi-generational community of humans beyond the solar system.

Paul Tillich noticed the intoxicating appeal that space travel holds for certain kinds of people. Those first space flights became symbols of a new ideal of human existence, “the image of the man who looks down at the earth, not from heaven, but from a cosmic sphere above the earth”. A more common reaction to Project Persephone is summed up by a reader of the Daily Mail: “Only the ‘elite’ will go. The rest of us will be left to die.”

Perhaps being left to die on the home planet would be a more welcome fate. Imagine being trapped on this “exovivarium”, a self-contained world in which exported nature becomes a tool for human survival; a world where there is no night and day; no seasons; no mountains, streams, oceans or bald eagles; no ice, storms or winds; no sky; no sunrise; a closed world whose occupants would work to keep alive by simulation the archetypal habits of life on Earth.

Into the endless void

What kind of person imagines himself or herself living in such a world? What kind of being, after some decades, would such a post-terrestrial realm create? What kind of children would be bred there?

According to Project Persephone’s sociologist, Steve Fuller: “If the Earth ends up a no-go zone for human beings [sic] due to climate change or nuclear or biological warfare, we have to preserve human civilisation.”

Why would we have to preserve human civilisation? What is the value of a civilisation if not to raise human beings to a higher level of intellectual sophistication and moral responsibility? What is a civilisation worth if it cannot protect the natural conditions that gave birth to it?

Those who blast off leaving behind a ruined Earth would carry into space a fallen civilisation. As the Earth receded into the all-consuming blackness those who looked back on it would be the beings who had shirked their most primordial responsibility, beings corroded by nostalgia and survivor guilt.

He’s now mostly forgotten, but in the 1950s and 1960s the Swedish poet Harry Martinson was famous for his haunting epic poem Aniara, which told the story of a spaceship carrying a community of several thousand humans out into space escaping an Earth devastated by nuclear conflagration. At the end of the epic the spaceship’s controller laments the failure to create a new Eden:

“I had meant to make them an Edenic place,

but since we left the one we had destroyed

our only home became the night of space

where no god heard us in the endless void.”

So from the cruel fantasy of Plan C we are obliged to return to Plan A, and do all we can to slow the geological clock that has ticked over into the Anthropocene. If, on this Earthen beast provoked, a return to the halcyon days of an undisturbed climate is no longer possible, at least we can resolve to calm the agitations of “the wakened giant” and so make this new and unwanted epoch one in which humans can survive.

Geoengineering proposal may backfire: Ocean pipes ‘not cool,’ would end up warming climate (Science Daily)

Date: March 19, 2015

Source: Carnegie Institution

Summary: There are a variety of proposals that involve using vertical ocean pipes to move seawater to the surface from the depths in order to reap different potential climate benefits. One idea involves using ocean pipes to facilitate direct physical cooling of the surface ocean by replacing warm surface ocean waters with colder, deeper waters. New research shows that these pipes could actually increase global warming quite drastically.


To combat global climate change caused by greenhouse gases, alternative energy sources and other kinds of environmental interventions are needed. There are a variety of proposals that involve using vertical ocean pipes to move seawater to the surface from the depths in order to reap different potential climate benefits. A new study from a group of Carnegie scientists determines that these types of pipes could actually increase global warming quite drastically. It is published in Environmental Research Letters.

One proposed strategy–called Ocean Thermal Energy Conversion, or OTEC–involves using the temperature difference between deeper and shallower water to power a heat engine and produce clean electricity. A second proposal is to move carbon from the upper ocean down into the deep, where it wouldn’t interact with the atmosphere. Another idea, and the focus of this particular study, proposes that ocean pipes could facilitate direct physical cooling of the surface ocean by replacing warm surface ocean waters with colder, deeper waters.

“Our prediction going into the study was that vertical ocean pipes would effectively cool the Earth and remain effective for many centuries,” said Ken Caldeira, one of the three co-authors.

The team, which also included lead author Lester Kwiatkowski as well as Katharine Ricke, configured a model to test this idea and what they found surprised them. The model mimicked the ocean-water movement of ocean pipes if they were applied globally reaching to a depth of about a kilometer (just over half a mile). The model simulated the motion created by an idealized version of ocean pipes, not specific pipes. As such the model does not include real spacing of pipes, nor does it calculate how much energy they would require.

Their simulations showed that while global temperatures could be cooled by ocean pipe systems in the short term, warming would actually start to increase just 50 years after the pipes go into use. Their model showed that vertical movement of ocean water resulted in a decrease of clouds over the ocean and a loss of sea-ice.

Colder air is denser than warm air. Because of this, the air over the ocean surface that has been cooled by water from the depths has a higher atmospheric pressure than the air over land. The cool air over the ocean sinks downward reducing cloud formation over the ocean. Since more of the planet is covered with water than land, this would result in less cloud cover overall, which means that more of the Sun’s rays are absorbed by Earth, rather than being reflected back into space by clouds.

Water mixing caused by ocean pipes would also bring sea ice into contact with warmer waters, resulting in melting. What’s more, this would further decrease the reflection of the Sun’s radiation, which bounces off ice as well as clouds.
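
The cloud and sea-ice feedbacks described above can be illustrated with a zero-dimensional energy-balance calculation. This is only a toy model under textbook assumptions (it returns the planet's effective radiating temperature, not surface temperature, and the albedo values are made up), not the Carnegie simulation, but it shows why even a small drop in planetary reflectivity nudges the equilibrium temperature upward.

```python
S = 1361.0        # solar constant, W/m^2
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def equilibrium_temp(albedo):
    """Effective radiating temperature of a planet absorbing S*(1-albedo)/4."""
    return (S * (1.0 - albedo) / (4.0 * SIGMA)) ** 0.25

baseline = equilibrium_temp(0.30)         # roughly Earth's present albedo
less_reflective = equilibrium_temp(0.29)  # illustrative: fewer clouds, less sea ice
print(f"warming from the albedo change: {less_reflective - baseline:.2f} K")
```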

After 60 years, the pipes would cause an increase in global temperature of up to 1.2 degrees Celsius (2.2 degrees Fahrenheit). Over several centuries, the pipes put the Earth on a warming trend towards a temperature increase of 8.5 degrees Celsius (15.3 degrees Fahrenheit).

“I cannot envisage any scenario in which a large scale global implementation of ocean pipes would be advisable,” Kwiatkowski said. “In fact, our study shows it could exacerbate long-term warming and is therefore highly inadvisable at global scales.”

The authors do say, however, that ocean pipes might be useful on a small scale to help aerate ocean dead zones.


Journal Reference:

  1. Lester Kwiatkowski, Katharine L. Ricke and Ken Caldeira. Atmospheric consequences of disruption of the ocean thermocline. Environmental Research Letters, 2015. DOI: 10.1088/1748-9326/10/3/034016

Butterflies, Ants and the Internet of Things (Wired)

[Isn’t it scary that there are bright people who are that innocent? Or perhaps this is just a propaganda piece. – RT]

BY GEOFF WEBB, NETIQ

12.10.14  |  12:41 PM

Buckminster Fuller once wrote, “there is nothing in the caterpillar that tells you it’s going to be a butterfly.” It’s true that often our capacity to look at things and truly understand their final form is very limited. Nor can we necessarily predict what happens when many small changes combine – when small pebbles roll down a hillside and turn into a landslide that dams a river and floods a plain.

This is the situation we face now as we try to understand the final form and impact of the Internet of Things (IoT). Countless small, technological pebbles have begun to roll down the hillside from initial implementation to full realization.  In this case, the “pebbles” are the billions of sensors, actuators, and smart technologies that are rapidly forming the Internet of Things. And like the caterpillar in Fuller’s quote, the final shape of the IoT may look very different from our first guesses.

Whatever the world looks like as the IoT bears full fruit, the experience of our lives will be markedly different. The world around us will not only be aware of our presence, it will know who we are, and it will react to us, often before we are even aware of it. The day-to-day process of living will change because almost every piece of technology we touch (and many we do not) will begin to tailor its behavior to our specific needs and desires. Our car will talk to our house.

Walking into a store will be very different, as the displays around us could modify their behavior based on our preferences and buying habits.  The office of the future will be far more adaptive, less rigid, more connected – the building will know who we are and will be ready for us when we arrive.  Everything, from the way products are built and packaged and the way our buildings and cities are managed, to the simple process of travelling around, interacting with each other, will change and change dramatically. And it’s happening now.

We’re already seeing mainstream manufacturers building IoT awareness into their products, such as Whirlpool building Internet-aware washing machines, and specialized IoT consumer tech such as LIFX light bulbs which can be managed from a smartphone and will respond to events in your house. Even toys are becoming more and more connected as our children go online at even younger ages.  And while many of the consumer purchases may already be somehow “IoT” aware, we are still barely scratching the surface of the full potential of a fully connected world. The ultimate impact of the IoT will run far deeper, into the very fabric of our lives and the way we interact with the world around us.

One example is the German port of Hamburg. The Hamburg Port Authority is building what it refers to as a smartPort, literally embedding millions of sensors in everything from container-handling systems to street lights to provide the data and management capabilities needed to move cargo through the port more efficiently, avoid traffic snarl-ups, and even predict environmental impacts through sensors that respond to noise and air pollution.

Securing all those devices and sensors will require a new way of thinking about technology and the interactions of “things,” people, and data. What we must do, then, is to adopt an approach that scales to manage the staggering numbers of these sensors and devices, while still enabling us to identify when they are under attack or being misused.

This is essentially the same problem we already face when dealing with human beings – how do I know when someone is doing something they shouldn’t? Specifically, how can I identify a bad person in a crowd of law-abiding citizens?

The best answer is what I like to call the “Vegas Solution.” Rather than adopting a model that screens every person as they enter a casino, the security folks out in Nevada watch for behavior that indicates someone is up to no good, and then respond accordingly. It’s low impact for everyone else, but works with ruthless efficiency (as anyone who has ever tried counting cards in a casino will tell you).

This approach focuses on known behaviors and looks for anomalies. It is, at its most basic, the practical application of “identity.” If I understand the identity of the people I am watching, and as a result, their behavior, I can tell when someone is acting badly.

Now scale this up to the vast number of devices and sensors out there in the nascent IoT. If I understand the “identity” of all those washing machines, smart cars, traffic light sensors, industrial robots, and so on, I can determine what they should be doing, see when that behavior changes (even in subtle ways such as how they communicate with each other) and respond quickly when I detect something potentially bad.
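
A minimal sketch of that behavioral approach, with everything here (the device IDs, the messages-per-minute metric, the ten-sample warm-up and the three-sigma threshold) chosen purely for illustration: learn each identity's normal behavior, then flag readings that stray far from its baseline.

```python
from collections import defaultdict
from statistics import mean, pstdev

class BehaviorMonitor:
    """Per-identity baseline of a single metric, flagging large deviations."""

    def __init__(self):
        self.history = defaultdict(list)

    def observe(self, device_id, messages_per_min):
        self.history[device_id].append(messages_per_min)

    def is_anomalous(self, device_id, messages_per_min, threshold=3.0):
        samples = self.history[device_id]
        if len(samples) < 10:                      # not enough baseline yet
            return False
        mu, sigma = mean(samples), pstdev(samples)
        if sigma == 0:                             # perfectly steady device
            return messages_per_min != mu
        return abs(messages_per_min - mu) > threshold * sigma

monitor = BehaviorMonitor()
for rate in [4, 5, 6] * 20:                        # a washing machine's quiet chatter
    monitor.observe("washing-machine-42", rate)

print(monitor.is_anomalous("washing-machine-42", 6))    # False: normal behavior
print(monitor.is_anomalous("washing-machine-42", 500))  # True: something is misusing it
```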

The approach is sound; in fact, it’s probably the only approach that will scale to meet the complexity of all those billions upon billions of “things” that make up the IoT. The challenge is that a concept of identity must now be applied to far more “things” than we have ever managed before. If there is an “Internet of Everything,” there will need to be an “Identity of Everything” to go with it. And those identities will tell us what each device is, when it was created, how it should behave, what it is capable of, and so on. There are already proposed standards for this kind of thing, such as the UK’s HyperCat standard, which lets one device figure out what another device it can talk to actually does and therefore what kind of information it might want to share.
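
What such an identity might contain can be sketched as a simple record. The fields below are a generic illustration of “what each device is, when it was created, how it should behave” – they are assumptions for the sake of the example, not the HyperCat format or any other published schema.

```python
from dataclasses import dataclass, field

@dataclass
class DeviceIdentity:
    """Illustrative identity record for a 'thing' -- all fields are assumptions."""
    device_id: str
    device_type: str
    manufactured: str                                     # ISO 8601 date
    capabilities: list = field(default_factory=list)
    expected_peers: list = field(default_factory=list)    # who it normally talks to
    max_messages_per_min: int = 10                        # behavioral envelope for monitoring

washer = DeviceIdentity(
    device_id="washing-machine-42",
    device_type="appliance/washer",
    manufactured="2014-06-01",
    capabilities=["report-cycle-status", "receive-start-command"],
    expected_peers=["home-hub-7"],
)
print(washer)
```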

Where things get really interesting, however, is when we start to watch the interactions of all these identities – and especially the interactions of the “thing” identities and our own. How we humans, as users of the Internet, interact with all the devices around us will provide even more insight into our lives, wants, and behaviors. Watching how I interact with my car, and the car with the road, and so on, will help manage city traffic far more efficiently than broad-brush traffic studies. Likewise, as the wearable technology I have on my person (or in my person) interacts with the sensors around me, so my experience of almost everything, from shopping to public services, can be tailored and managed more efficiently. This, ultimately, is the promise of the IoT: a world that is responsive, intelligent and tailored for every situation.

As we continue to add more and more sensors and smart devices, the potential power of the IoT grows.  Many small, slightly smart things have a habit of combining to perform amazing feats. Taking another example from nature, leaf-cutter ants (tiny in the extreme) nevertheless combine to form the second most complex social structures on earth (after humans) and can build staggeringly large homes.

When we combine the billions of smart devices into the final IoT, we should expect to be surprised by the final form all those interactions take, and by the complexity of the thing we create.  Those things can and will work together, and how they behave will be defined by the identities we give them today.

Geoff Webb is Director of Solution Strategy at NetIQ.

Climate manipulation may cause unwanted effects (N.Y.Times/FSP)

Olivine, a green-tinted mineral said to remove carbon dioxide from the atmosphere, in the hands of retired geochemist Olaf Schuiling in Maasland, Netherlands, Oct. 9, 2014. (Ilvy Njiokiktjien/The New York Times)

HENRY FOUNTAIN
FROM “THE NEW YORK TIMES”

November 18, 2014, 02:01

For Olaf Schuiling, the solution to global warming lies beneath our feet.

Schuiling, a retired geochemist, believes that climate salvation lies in olivine, a greenish mineral that is abundant around the world. When exposed to the elements, it slowly pulls carbon dioxide out of the atmosphere.

Olivine has been doing this naturally for billions of years, but Schuiling wants to speed the process up by spreading it on fields and beaches and using it in dikes, trails and even playgrounds. Sprinkle the right amount of crushed rock, he says, and it will eventually remove enough carbon dioxide to slow the rise in global temperatures.
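
A back-of-the-envelope check of the chemistry behind the claim, using the standard weathering reaction for forsterite olivine (Mg2SiO4 + 4 CO2 + 4 H2O → 2 Mg²⁺ + 4 HCO3⁻ + H4SiO4). The arithmetic only gives the theoretical ceiling; as the article goes on to note, the reaction is slow and the mining, grinding and transport would eat into the balance.

```python
# Theoretical CO2 uptake of forsterite olivine weathering:
# Mg2SiO4 + 4 CO2 + 4 H2O -> 2 Mg2+ + 4 HCO3- + H4SiO4
MG, SI, O, C = 24.305, 28.086, 15.999, 12.011   # atomic masses, g/mol

olivine = 2 * MG + SI + 4 * O    # ~140.7 g/mol of forsterite
co2 = C + 2 * O                  # ~44.0 g/mol

ratio = 4 * co2 / olivine        # tonnes of CO2 bound per tonne of olivine, at best
print(f"theoretical uptake: about {ratio:.2f} t CO2 per t of olivine")
```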

“Let’s let the Earth help us save it,” said Schuiling, 82, in his office at Utrecht University.
Ideas for countering climate change, such as these geoengineering proposals, were once considered the stuff of pure fantasy.

But the effects of climate change may become so severe that such solutions start to be taken seriously.

Schuiling’s idea is one of several that aim to reduce levels of carbon dioxide, the main greenhouse gas, so that the atmosphere retains less heat.

Other approaches, potentially faster and more feasible but also riskier, would create the equivalent of a sunshade around the planet, by dispersing reflective droplets in the stratosphere or spraying seawater to form more clouds over the oceans. With less sunlight reaching the Earth’s surface, less heat would be retained, resulting in a rapid drop in temperatures.

No one is sure that any geoengineering technique would actually work, and many approaches in the field seem impractical. Schuiling’s approach, for example, would take decades to have even a small impact, and the mining, grinding and transport of the billions of tons of olivine required would themselves produce enormous carbon emissions.

Children play on a playground surfaced with olivine in Arnhem, Netherlands, Oct. 9, 2014; the green-tinted mineral slowly removes carbon dioxide from the atmosphere. (Jasper Juinen/The New York Times)

Many people consider geoengineering a desperate last resort on climate change, one that would distract the world from the goal of eliminating the emissions at the root of the problem.

The climate is a highly complex system, so manipulating temperatures may also have consequences – changes in rainfall, for instance – that are catastrophic, or that benefit one region at the expense of another. Critics also point out that geoengineering could be deployed unilaterally by a single country, creating another source of geopolitical tension.

Experts, however, argue that the current situation is becoming calamitous. “We may soon be left with only the choice between geoengineering and suffering,” said Andy Parker of the Institute for Advanced Sustainability Studies in Potsdam, Germany.

In 1991, a volcanic eruption in the Philippines ejected the largest cloud of sulfur dioxide ever recorded into the upper atmosphere. The gas formed droplets of sulfuric acid, which reflected sunlight back into space. For three years, average global temperatures dropped by about 0.5 degrees Celsius. One geoengineering technique would mimic that effect by spraying sulfuric acid droplets into the stratosphere.

David Keith, a researcher at Harvard University, said that this geoengineering technique, called solar radiation management (SRM), should be deployed only slowly and carefully, so that it can be halted if it disrupts weather patterns or causes other problems.

Some critics of geoengineering doubt that any of its impacts could be balanced out. People in less developed countries are affected by climate change largely caused by the actions of industrialized countries. Why, then, would they trust that spraying droplets into the sky is meant to help them?

“Nobody likes being the rat in someone else’s laboratory,” said Pablo Suarez of the Red Cross/Red Crescent Climate Centre.

Ideas for removing carbon dioxide from the air cause less alarm. Although they raise thorny issues – olivine, for example, contains small amounts of metals that could contaminate the environment – they would work far more slowly and indirectly, affecting the climate over decades by altering the atmosphere.

Because Dr. Schuiling has been promoting his idea in the Netherlands for years, the country has taken to olivine. Anyone aware of it can spot the crushed rock on trails, in gardens and in play areas.

Eddy Wijnker, a former acoustical engineer, founded the company greenSand in the small town of Maasland. It sells olivine sand for domestic or commercial use. The company also sells “green sand certificates” that fund the spreading of the sand along highways.

Schuiling’s persistence has also spurred research. At the Royal Netherlands Institute for Sea Research in Yerseke, the ecologist Francesc Montserrat is investigating the possibility of spreading olivine on the seabed. In Belgium, researchers at the University of Antwerp are studying the effects of olivine on crops such as barley and wheat.

Most geoengineering researchers point to the need for further studies and to the limits of computer simulations.

Little funding anywhere in the world goes to geoengineering research. Yet even the suggestion of field experiments can cause a public outcry. “People like bright lines, and a fairly obvious one is that it’s fine to test things on a computer or a lab bench,” said Matthew Watson of the University of Bristol in the United Kingdom. “But they react badly as soon as you start to move into the real world.”

Watson knows those lines well. He led a project, funded by the British government, that included a relatively innocuous test of one technology. In 2011, the researchers planned to launch a balloon to an altitude of about one kilometer and try to pump a small amount of water up a hose to it. The proposal triggered protests in the United Kingdom, was delayed for half a year and was finally cancelled.

Today there is little prospect of government support for any kind of geoengineering test in the United States, where many politicians deny that climate change is even real.

“The conventional wisdom is that the right doesn’t want to talk about it because it acknowledges the problem,” said Rafe Pomerance, who worked on environmental issues at the State Department. “And the left is worried about the impact of emissions.”

So it would be good to discuss the subject openly, Pomerance said. “It will still take some time, but it is inevitable,” he added.

Projecting a robot’s intentions: New spin on virtual reality helps engineers read robots’ minds (Science Daily)

Date: October 29, 2014

Source: Massachusetts Institute of Technology

Summary: In a darkened, hangar-like space inside MIT’s Building 41, a small, Roomba-like robot is trying to make up its mind. Standing in its path is an obstacle — a human pedestrian who’s pacing back and forth. To get to the other side of the room, the robot has to first determine where the pedestrian is, then choose the optimal route to avoid a close encounter. As the robot considers its options, its “thoughts” are projected on the ground: A large pink dot appears to follow the pedestrian — a symbol of the robot’s perception of the pedestrian’s position in space.

A new spin on virtual reality helps engineers read robots’ minds. Credit: Video screenshot courtesy of Melanie Gonick/MIT

In a darkened, hangar-like space inside MIT’s Building 41, a small, Roomba-like robot is trying to make up its mind.

Standing in its path is an obstacle — a human pedestrian who’s pacing back and forth. To get to the other side of the room, the robot has to first determine where the pedestrian is, then choose the optimal route to avoid a close encounter.

As the robot considers its options, its “thoughts” are projected on the ground: A large pink dot appears to follow the pedestrian — a symbol of the robot’s perception of the pedestrian’s position in space. Lines, each representing a possible route for the robot to take, radiate across the room in meandering patterns and colors, with a green line signifying the optimal route. The lines and dots shift and adjust as the pedestrian and the robot move.

This new visualization system combines ceiling-mounted projectors with motion-capture technology and animation software to project a robot’s intentions in real time. The researchers have dubbed the system “measurable virtual reality” (MVR) — a spin on conventional virtual reality that’s designed to visualize a robot’s “perceptions and understanding of the world,” says Ali-akbar Agha-mohammadi, a postdoc in MIT’s Aerospace Controls Lab.

“Normally, a robot may make some decision, but you can’t quite tell what’s going on in its mind — why it’s choosing a particular path,” Agha-mohammadi says. “But if you can see the robot’s plan projected on the ground, you can connect what it perceives with what it does to make sense of its actions.”

Agha-mohammadi says the system may help speed up the development of self-driving cars, package-delivering drones, and other autonomous, route-planning vehicles.

“As designers, when we can compare the robot’s perceptions with how it acts, we can find bugs in our code much faster,” Agha-mohammadi says. “For example, if we fly a quadrotor, and see something go wrong in its mind, we can terminate the code before it hits the wall, or breaks.”

The system was developed by Shayegan Omidshafiei, a graduate student, and Agha-mohammadi. They and their colleagues, including Jonathan How, a professor of aeronautics and astronautics, will present details of the visualization system at the American Institute of Aeronautics and Astronautics’ SciTech conference in January.

Seeing into the mind of a robot

The researchers initially conceived of the visualization system in response to feedback from visitors to their lab. During demonstrations of robotic missions, it was often difficult for people to understand why robots chose certain actions.

“Some of the decisions almost seemed random,” Omidshafiei recalls.

The team developed the system as a way to visually represent the robots’ decision-making process. The engineers mounted 18 motion-capture cameras on the ceiling to track multiple robotic vehicles simultaneously. They then developed computer software that visually renders “hidden” information, such as a robot’s possible routes, and its perception of an obstacle’s position. They projected this information on the ground in real time, as physical robots operated.
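
The projection step itself is conceptually simple. Below is a minimal sketch in Python of that idea (the names, geometry and “draw commands” are invented for illustration; this is not MIT’s code): take the motion-capture estimate of the pedestrian’s position plus a set of candidate routes, score each route, and emit drawing commands for the ceiling projector.

# Hypothetical sketch of the MVR projection idea; Path, clearance and the
# draw-command strings are inventions for illustration, not MIT's interfaces.
from dataclasses import dataclass
from math import dist

@dataclass
class Path:
    waypoints: list[tuple[float, float]]  # (x, y) points on the floor, in metres

def clearance(path: Path, obstacle: tuple[float, float]) -> float:
    """Smallest distance between the path and the perceived obstacle position."""
    return min(dist(p, obstacle) for p in path.waypoints)

def length(path: Path) -> float:
    return sum(dist(a, b) for a, b in zip(path.waypoints, path.waypoints[1:]))

def choose_and_render(paths: list[Path], obstacle: tuple[float, float]) -> list[str]:
    """Pink dot for the perceived obstacle, grey lines for candidate routes,
    green line for the route the robot actually picks."""
    best = max(paths, key=lambda p: clearance(p, obstacle) - 0.1 * length(p))
    commands = [f"DOT pink {obstacle}"]
    for p in paths:
        colour = "green" if p is best else "grey"
        commands.append(f"LINE {colour} {p.waypoints}")
    return commands

pedestrian = (2.0, 1.5)  # position estimate from the motion-capture system
candidates = [Path([(0, 0), (1, 2), (4, 2)]), Path([(0, 0), (2, 0), (4, 2)])]
for command in choose_and_render(candidates, pedestrian):
    print(command)

Rendering these commands through calibrated projectors is where the motion-capture rig matters; the point of the sketch is only that the robot’s internal estimate and its chosen plan become something a bystander can see.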

The researchers soon found that by projecting the robots’ intentions, they were able to spot problems in the underlying algorithms, and make improvements much faster than before.

“There are a lot of problems that pop up because of uncertainty in the real world, or hardware issues, and that’s where our system can significantly reduce the amount of effort spent by researchers to pinpoint the causes,” Omidshafiei says. “Traditionally, physical and simulation systems were disjointed. You would have to go to the lowest level of your code, break it down, and try to figure out where the issues were coming from. Now we have the capability to show low-level information in a physical manner, so you don’t have to go deep into your code, or restructure your vision of how your algorithm works. You could see applications where you might cut down a whole month of work into a few days.”

Bringing the outdoors in

The group has explored a few such applications using the visualization system. In one scenario, the team is looking into the role of drones in fighting forest fires. Such drones may one day be used both to survey and to squelch fires — first observing a fire’s effect on various types of vegetation, then identifying and putting out those fires that are most likely to spread.

To make fire-fighting drones a reality, the team is first testing the possibility virtually. In addition to projecting a drone’s intentions, the researchers can also project landscapes to simulate an outdoor environment. In test scenarios, the group has flown physical quadrotors over projections of forests, shown from an aerial perspective to simulate a drone’s view, as if it were flying over treetops. The researchers projected fire on various parts of the landscape, and directed quadrotors to take images of the terrain — images that could eventually be used to “teach” the robots to recognize signs of a particularly dangerous fire.

Going forward, Agha-mohammadi says, the team plans to use the system to test drone performance in package-delivery scenarios. Toward this end, the researchers will simulate urban environments by creating street-view projections of cities, similar to zoomed-in perspectives on Google Maps.

“Imagine we can project a bunch of apartments in Cambridge,” Agha-mohammadi says. “Depending on where the vehicle is, you can look at the environment from different angles, and what it sees will be quite similar to what it would see if it were flying in reality.”

Because the Federal Aviation Administration has placed restrictions on outdoor testing of quadrotors and other autonomous flying vehicles, Omidshafiei points out that testing such robots in a virtual environment may be the next best thing. In fact, the sky’s the limit as far as the types of virtual environments that the new system may project.

“With this system, you can design any environment you want, and can test and prototype your vehicles as if they’re fully outdoors, before you deploy them in the real world,” Omidshafiei says.

This work was supported by Boeing.

Video: http://www.youtube.com/watch?v=utM9zOYXgUY

Global warming pioneer calls for carbon dioxide to be taken from atmosphere and stored underground (Science Daily)

Date: August 28, 2014

Source: European Association of Geochemistry

Summary: Wally Broecker, the first person to alert the world to global warming, has called for atmospheric carbon dioxide to be captured and stored underground.


Wally Broecker, the first person to alert the world to global warming, has called for atmospheric CO2 to be captured and stored underground. He says that carbon capture, combined with limits on fossil fuel emissions, is the best way to avoid global warming getting out of control over the next fifty years. Professor Broecker (Columbia University, New York) made the call during his presentation to the International Carbon Conference in Reykjavik, Iceland, where 150 scientists are meeting to discuss carbon capture and storage.

He was presenting an analysis which showed that the world has been cooling very slowly over the last 51 million years, but that human activity is causing a rise in temperature which will lead to problems over the next 100,000 years.

“We have painted ourselves into a tight corner. We can’t reduce our reliance on fossil fuels quickly enough, so we need to look at alternatives.

“One of the best ways to deal with this is likely to be carbon capture — in other words, putting the carbon back where it came from, underground. There has been great progress in capturing carbon from industrial processes, but to really make a difference we need to begin to capture atmospheric CO2. Ideally, we could reach a stage where we could control the levels of CO2 in the atmosphere, like you control your central heating. Continually increasing CO2 levels means that we will need to actively manage CO2 levels in the environment, not just stop more being produced. The technology is proven, it just needs to be brought to a stage where it can be implemented.”

Wally Broecker was speaking at the International Carbon Conference in Reykjavik, where 150 scientists are meeting to discuss how best CO2 can be removed from the atmosphere as part of a programme to reduce global warming.

Meeting co-convener Professor Eric Oelkers (University College London and University of Toulouse) commented: “Capture is now at a crossroads; we have proven methods to store carbon in the Earth but are limited in our ability to capture this carbon directly from the atmosphere. We are very good at capturing carbon from factories and power stations, but because roughly two-thirds of our carbon originates from dispersed sources, implementing direct air capture is key to solving this global challenge.”

European Association of Geochemistry. “Global warming pioneer calls for carbon dioxide to be taken from atmosphere and stored underground.” ScienceDaily. ScienceDaily, 28 August 2014. <www.sciencedaily.com/releases/2014/08/140828110915.htm>.

Carbon dioxide ‘sponge’ could ease transition to cleaner energy (Science Daily)

Date: August 10, 2014

Source: American Chemical Society (ACS)

Summary: A plastic sponge that sops up the greenhouse gas carbon dioxide might ease our transition away from polluting fossil fuels to new energy sources like hydrogen. A relative of food container plastics could play a role in President Obama’s plan to cut carbon dioxide emissions. The material might also someday be integrated into power plant smokestacks.


Plastic that soaks up carbon dioxide could someday be used in power plant smokestacks.
Credit: American Chemical Society

A sponge-like plastic that sops up the greenhouse gas carbon dioxide (CO2) might ease our transition away from polluting fossil fuels and toward new energy sources, such as hydrogen. The material — a relative of the plastics used in food containers — could play a role in President Obama’s plan to cut CO2 emissions 30 percent by 2030, and could also be integrated into power plant smokestacks in the future.

The report on the material is one of nearly 12,000 presentations at the 248th National Meeting & Exposition of the American Chemical Society (ACS), the world’s largest scientific society, taking place here through Thursday.

“The key point is that this polymer is stable, it’s cheap, and it adsorbs CO2 extremely well. It’s geared toward function in a real-world environment,” says Andrew Cooper, Ph.D. “In a future landscape where fuel-cell technology is used, this adsorbent could work toward zero-emission technology.”

CO2 adsorbents are most commonly used to remove the greenhouse gas pollutant from smokestacks at power plants where fossil fuels like coal or gas are burned. However, Cooper and his team intend the adsorbent, a microporous organic polymer, for a different application — one that could lead to reduced pollution.

The new material would be a part of an emerging technology called an integrated gasification combined cycle (IGCC), which can convert fossil fuels into hydrogen gas. Hydrogen holds great promise for use in fuel-cell cars and electricity generation because it produces almost no pollution. IGCC is a bridging technology that is intended to jump-start the hydrogen economy, or the transition to hydrogen fuel, while still using the existing fossil-fuel infrastructure. But the IGCC process yields a mixture of hydrogen and CO2 gas, which must be separated.

Cooper, who is at the University of Liverpool, says that the sponge works best under the high pressures intrinsic to the IGCC process. Just like a kitchen sponge swells when it takes on water, the adsorbent swells slightly when it soaks up CO2 in the tiny spaces between its molecules. When the pressure drops, he explains, the adsorbent deflates and releases the CO2, which they can then collect for storage or convert into useful carbon compounds.
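
As a rough illustration of that pressure-swing behaviour, here is a short Python sketch using a generic Langmuir-type isotherm; the capacity and affinity values are invented for illustration, not measured properties of Cooper’s polymer.

# Generic pressure-swing illustration with a Langmuir-type isotherm.
# q_max (saturation capacity) and b (affinity) are made-up numbers.
def co2_loading(pressure_bar: float, q_max: float = 2.0, b: float = 0.15) -> float:
    """Illustrative CO2 uptake (mmol per gram of adsorbent) at a given pressure."""
    return q_max * b * pressure_bar / (1.0 + b * pressure_bar)

high, low = 30.0, 1.0  # adsorb at IGCC-like pressure, release near ambient (bar)
print(f"Uptake at {high} bar: {co2_loading(high):.2f} mmol/g")
print(f"Uptake at {low} bar:  {co2_loading(low):.2f} mmol/g")
print(f"Released per swing:   {co2_loading(high) - co2_loading(low):.2f} mmol/g")

The difference between uptake at high and low pressure is the working capacity recovered on each adsorb-and-release cycle.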

The material, which is a brown, sand-like powder, is made by linking together many small carbon-based molecules into a network. Cooper explains that the idea to use this structure was inspired by polystyrene, a plastic used in styrofoam and other packaging material. Polystyrene can adsorb small amounts of CO2 by the same swelling action.

One advantage of using polymers is that they tend to be very stable. The material can even withstand being boiled in acid, proving it should tolerate the harsh conditions in power plants where CO2 adsorbents are needed. Other CO2 scrubbers — whether made from plastics or metals or in liquid form — do not always hold up so well, he says. Another advantage of the new adsorbent is its ability to adsorb CO2 without also taking on water vapor, which can clog up other materials and make them less effective. Its low cost also makes the sponge polymer attractive. “Compared to many other adsorbents, they’re cheap,” Cooper says, mostly because the carbon molecules used to make them are inexpensive. “And in principle, they’re highly reusable and have long lifetimes because they’re very robust.”

Cooper also will describe ways to adapt his microporous polymer for use in smokestacks and other exhaust streams. He explains that it is relatively simple to embed the spongy polymers in the kinds of membranes already being evaluated to remove CO2 from power plant exhaust, for instance. Combining two types of scrubbers could make much better adsorbents by harnessing the strengths of each, he explains.

The research was funded by the Engineering and Physical Sciences Research Council and E.ON Energy.

Geoengineering the Earth’s climate sends policy debate down a curious rabbit hole (The Guardian)

Many of the world’s major scientific establishments are discussing the concept of modifying the Earth’s climate to offset global warming

Monday 4 August 2014

Many leading scientific institutions are now looking at proposed ways to engineer the planet's climate to offset the impacts of global warming.

Many leading scientific institutions are now looking at proposed ways to engineer the planet’s climate to offset the impacts of global warming. Photograph: NASA/REUTERS

There’s a bit in Alice’s Adventures in Wonderland where things get “curiouser and curiouser” as the heroine tries to reach a garden at the end of a rat-hole sized corridor that she’s just way too big for.

She drinks a potion and eats a cake with no real clue what the consequences might be. She grows to nine feet tall, shrinks to ten inches high and cries literal floods of frustrated tears.

I spent a couple of days at a symposium in Sydney last week that looked at the moral and ethical issues around the concept of geoengineering the Earth’s climate as a “response” to global warming.

No metaphor is ever quite perfect (climate impacts are no ‘wonderland’), but Alice’s curious experiences down the rabbit hole seem to fit the idea of medicating the globe out of a possible catastrophe.

And yes, the fact that in some quarters geoengineering is now on the table shows how the debate over climate change policy is itself becoming “curiouser and curiouser” still.

It’s tempting too to dismiss ideas like pumping sulphate particles into the atmosphere or making clouds whiter as some sort of surrealist science fiction.

But beyond the curiosity lies actions being countenanced and discussed by some of the world’s leading scientific institutions.

What is geoengineering?

Geoengineering – also known as climate engineering or climate modification – comes in as many flavours as might have been on offer at the Mad Hatter’s Tea Party.

Professor Jim Falk, of the Melbourne Sustainable Society Institute at the University of Melbourne, has a list of more than 40 different techniques that have been suggested.

They generally take two approaches.

Carbon Dioxide Removal (CDR) is pretty self-explanatory. Think tree planting, algae farming, increasing the carbon in soils, fertilising the oceans or capturing emissions from power stations. Anything that cuts the amount of CO2 in the atmosphere.

Solar Radiation Management (SRM) techniques are concepts to try and reduce the amount of solar energy reaching the earth. Think pumping sulphate particles into the atmosphere (this mimics major volcanic eruptions that have a cooling effect on the planet), trying to whiten clouds or more benign ideas like painting roofs white.

Geoengineering on the table

In 2008 an Australian Government–backed research group issued a report on the state of play of ocean fertilisation, recording that 12 experiments of various kinds had been carried out, with limited to zero evidence of “success”.

This priming of the “biological pump”, as it’s known, promotes the growth of organisms (phytoplankton) that store carbon and then sink to the bottom of the ocean.

The report raised the prospect that larger scale experiments could interfere with the oceanic food chain, create oxygen-depleted “dead zones” (no fish folks), impact on corals and plants and various other unknowns.

The Royal Society – the world’s oldest scientific institution – released a report in 2009, also reviewing various geoengineering technologies.

In 2011, Australian scientists gathered at a geoengineering symposium organised by the Australian Academy of Science and the Australian Academy of Technological Sciences and Engineering.

The London Protocol – a maritime convention relating to dumping at sea – was amended last year to try and regulate attempts at “ocean fertilisation” – where substances, usually iron, are dumped into the ocean to artificially raise the uptake of carbon dioxide.

The United Nations Intergovernmental Panel on Climate Change also addressed the geoengineering issue in several chapters of its latest major report. The IPCC summarised geoengineering this way.

CDR methods have biogeochemical and technological limitations to their potential on a global scale. There is insufficient knowledge to quantify how much CO2 emissions could be partially offset by CDR on a century timescale. Modelling indicates that SRM methods, if realizable, have the potential to substantially offset a global temperature rise, but they would also modify the global water cycle, and would not reduce ocean acidification. If SRM were terminated for any reason, there is high confidence that global surface temperatures would rise very rapidly to values consistent with the greenhouse gas forcing. CDR and SRM methods carry side effects and long-term consequences on a global scale.

Towards the end of this year, the US National Academy of Sciences will be publishing a major report on the “technical feasibility” of some geoengineering techniques.

Fighting Fire With Fire

The symposium in Sydney was co-hosted by the University of New South Wales and the Sydney Environment Institute at the University of Sydney (for full disclosure here, they paid my travel costs and one night stay).

Dr Matthew Kearnes, one of the organisers of the workshop from UNSW, told me there was “nervousness among many people about even thinking or talking about geoengineering.” He said:

I would not want to dismiss that nervousness, but this is an agenda that’s now out there and it seems to be gathering steam and credibility in some elite establishments.

Internationally geoengineering tends to be framed pretty narrowly as just a case of technical feasibility, cost and efficacy. Could it be done? What would it cost? How quickly would it work?

We wanted to get a way from the arguments about the pros and cons and instead think much more carefully about what this tells us about the climate change debate more generally.

The symposium covered a range of frankly exhausting philosophical, social and political considerations – each of them jumbo-sized cans full of worms ready to open.

Professor Stephen Gardiner, of the University of Washington, Seattle, pushed for the wider community to think about the ethical and moral consequences of geoengineering. He drew a parallel between the way, he said, that current fossil fuel combustion takes benefits now at the expense of impacts on future generations. Geoengineering risked making the same mistake.

Clive Hamilton’s book Earthmasters notes “in practice any realistic assessment of how the world works must conclude that geoengineering research is virtually certain to reduce incentives to pursue emission reductions”.

Odd advocates

Curiouser still, is that some of the world’s think tanks who shout the loudest that human-caused climate change might not even be a thing, or at least a thing not worth worrying about, are happy to countenance geoengineering as a solution to the problem they think is overblown.

For example, in January this year the Copenhagen Consensus Center, a US-based think tank founded by Danish political scientist Bjorn Lomborg, issued a submission to an Australian Senate inquiry looking at overseas aid and development.

Lomborg’s center has for many years argued that cutting greenhouse gas emissions is too expensive and that action on climate change should be a low priority compared with other issues around the world.

Lomborg himself says human-caused climate change will not turn into an economic negative until near the end of this century.

Yet Lomborg’s submission to the Australian Senate suggested that every dollar spent on “investigat[ing] the feasibility of planetary cooling through geoengineering technologies” could yield “$1000 of benefits”, although this, Lomborg wrote, was a “rough estimate”.

But these investigations, Lomborg submitted, “would serve to better understand risks, costs, and benefits, but also act as an important potential insurance against global warming”.

Engineering another excuse

Several academics I’ve spoken with have voiced fears that the idea of unproven and potentially disastrous geoengineering technologies being an option to shield societies from the impacts of climate change could be used to distract policy makers and the public from addressing the core of the climate change issue – that is, curbing emissions in the first place.

But if the idea of some future nation, or group of nations, or even corporations, embarking on a major project to modify the Earth’s climate systems leaves you feeling like you’ve fallen down a surreal rabbit hole, then perhaps we should also ask ourselves this.

Since the year 1750, the world has added something in the region of 1,339,000,000,000 tonnes of carbon dioxide (that’s 1.34 trillion tonnes) to the atmosphere from fossil fuel and cement production.

Raising the level of CO2 in the atmosphere by 40 per cent could be seen as accidental geoengineering.

Time to crawl out of the rabbit hole?

The rise of data and the death of politics (The Guardian)

Tech pioneers in the US are advocating a new data-based approach to governance – ‘algorithmic regulation’. But if technology provides the answers to society’s problems, what happens to governments?

The Observer, Sunday 20 July 2014

US president Barack Obama with Facebook founder Mark Zuckerberg

Government by social network? US president Barack Obama with Facebook founder Mark Zuckerberg. Photograph: Mandel Ngan/AFP/Getty Images

On 24 August 1965 Gloria Placente, a 34-year-old resident of Queens, New York, was driving to Orchard Beach in the Bronx. Clad in shorts and sunglasses, the housewife was looking forward to quiet time at the beach. But the moment she crossed the Willis Avenue bridge in her Chevrolet Corvair, Placente was surrounded by a dozen patrolmen. There were also 125 reporters, eager to witness the launch of New York police department’s Operation Corral – an acronym for Computer Oriented Retrieval of Auto Larcenists.

Fifteen months earlier, Placente had driven through a red light and neglected to answer the summons, an offence that Corral was going to punish with a heavy dose of techno-Kafkaesque. It worked as follows: a police car stationed at one end of the bridge radioed the licence plates of oncoming cars to a teletypist miles away, who fed them to a Univac 490 computer, an expensive $500,000 toy ($3.5m in today’s dollars) on loan from the Sperry Rand Corporation. The computer checked the numbers against a database of 110,000 cars that were either stolen or belonged to known offenders. In case of a match the teletypist would alert a second patrol car at the bridge’s other exit. It took, on average, just seven seconds.
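
Stripped of the teletype and the Univac, the Corral workflow reduces to a watch-list lookup. A toy Python sketch, with invented plate numbers standing in for the 110,000-entry database:

# Toy reconstruction of the Corral loop: plates radioed from one end of the
# bridge are checked against a watch list; a hit alerts the patrol car at the
# other exit. The plates and the alert format are invented for illustration.
WATCH_LIST = {"1A2345", "7X9012"}  # stand-in for the stolen/known-offender database

def process_bridge_traffic(plates):
    for plate in plates:
        if plate in WATCH_LIST:
            print(f"ALERT exit patrol: stop vehicle {plate}")
        else:
            print(f"{plate}: no match")

process_bridge_traffic(["3B4567", "1A2345"])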

Compared with the impressive police gear of today – automatic number plate recognition, CCTV cameras, GPS trackers – Operation Corral looks quaint. And the possibilities for control will only expand. European officials have considered requiring all cars entering the European market to feature a built-in mechanism that allows the police to stop vehicles remotely. Speaking earlier this year, Jim Farley, a senior Ford executive, acknowledged that “we know everyone who breaks the law, we know when you’re doing it. We have GPS in your car, so we know what you’re doing. By the way, we don’t supply that data to anyone.” That last bit didn’t sound very reassuring and Farley retracted his remarks.

As both cars and roads get “smart,” they promise nearly perfect, real-time law enforcement. Instead of waiting for drivers to break the law, authorities can simply prevent the crime. Thus, a 50-mile stretch of the A14 between Felixstowe and Rugby is to be equipped with numerous sensors that would monitor traffic by sending signals to and from mobile phones in moving vehicles. The telecoms watchdog Ofcom envisions that such smart roads connected to a centrally controlled traffic system could automatically impose variable speed limits to smooth the flow of traffic but also direct the cars “along diverted routes to avoid the congestion and even [manage] their speed”.

Other gadgets – from smartphones to smart glasses – promise even more security and safety. In April, Apple patented technology that deploys sensors inside the smartphone to analyse if the car is moving and if the person using the phone is driving; if both conditions are met, it simply blocks the phone’s texting feature. Intel and Ford are working on Project Mobil – a face recognition system that, should it fail to recognise the face of the driver, would not only prevent the car being started but also send the picture to the car’s owner (bad news for teenagers).

The car is emblematic of transformations in many other domains, from smart environments for “ambient assisted living” where carpets and walls detect that someone has fallen, to various masterplans for the smart city, where municipal services dispatch resources only to those areas that need them. Thanks to sensors and internet connectivity, the most banal everyday objects have acquired tremendous power to regulate behaviour. Even public toilets are ripe for sensor-based optimisation: the Safeguard Germ Alarm, a smart soap dispenser developed by Procter & Gamble and used in some public WCs in the Philippines, has sensors monitoring the doors of each stall. Once you leave the stall, the alarm starts ringing – and can only be stopped by a push of the soap-dispensing button.

In this context, Google’s latest plan to push its Android operating system on to smart watches, smart cars, smart thermostats and, one suspects, smart everything, looks rather ominous. In the near future, Google will be the middleman standing between you and your fridge, you and your car, you and your rubbish bin, allowing the National Security Agency to satisfy its data addiction in bulk and via a single window.

This “smartification” of everyday life follows a familiar pattern: there’s primary data – a list of what’s in your smart fridge and your bin – and metadata – a log of how often you open either of these things or when they communicate with one another. Both produce interesting insights: cue smart mattresses – one recent model promises to track respiration and heart rates and how much you move during the night – and smart utensils that provide nutritional advice.

In addition to making our lives more efficient, this smart world also presents us with an exciting political choice. If so much of our everyday behaviour is already captured, analysed and nudged, why stick with unempirical approaches to regulation? Why rely on laws when one has sensors and feedback mechanisms? If policy interventions are to be – to use the buzzwords of the day – “evidence-based” and “results-oriented,” technology is here to help.

This new type of governance has a name: algorithmic regulation. In as much as Silicon Valley has a political programme, this is it. Tim O’Reilly, an influential technology publisher, venture capitalist and ideas man (he is to blame for popularising the term “web 2.0”) has been its most enthusiastic promoter. In a recent essay that lays out his reasoning, O’Reilly makes an intriguing case for the virtues of algorithmic regulation – a case that deserves close scrutiny both for what it promises policymakers and the simplistic assumptions it makes about politics, democracy and power.

To see algorithmic regulation at work, look no further than the spam filter in your email. Instead of confining itself to a narrow definition of spam, the email filter has its users teach it. Even Google can’t write rules to cover all the ingenious innovations of professional spammers. What it can do, though, is teach the system what makes a good rule and spot when it’s time to find another rule for finding a good rule – and so on. An algorithm can do this, but it’s the constant real-time feedback from its users that allows the system to counter threats never envisioned by its designers. And it’s not just spam: your bank uses similar methods to spot credit-card fraud.
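
A minimal Python sketch of that feedback loop, using a toy word-count model rather than anything Google actually runs: every “mark as spam” report retrains the filter, so no rule is ever written by hand.

# Toy feedback-trained spam filter: the model is nothing but counts of words
# seen in messages that users reported as spam or not-spam.
from collections import Counter

spam_words, ham_words = Counter(), Counter()

def report(message: str, is_spam: bool) -> None:
    """User feedback ('mark as spam' / 'not spam') updates the model."""
    (spam_words if is_spam else ham_words).update(message.lower().split())

def looks_like_spam(message: str) -> bool:
    words = message.lower().split()
    return sum(spam_words[w] for w in words) > sum(ham_words[w] for w in words)

report("win a free prize now", is_spam=True)
report("lunch meeting moved to noon", is_spam=False)
print(looks_like_spam("claim your free prize"))   # True
print(looks_like_spam("agenda for the meeting"))  # False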

In his essay, O’Reilly draws broader philosophical lessons from such technologies, arguing that they work because they rely on “a deep understanding of the desired outcome” (spam is bad!) and periodically check if the algorithms are actually working as expected (are too many legitimate emails ending up marked as spam?).

O’Reilly presents such technologies as novel and unique – we are living through a digital revolution after all – but the principle behind “algorithmic regulation” would be familiar to the founders of cybernetics – a discipline that, even in its name (it means “the science of governance”) hints at its great regulatory ambitions. This principle, which allows the system to maintain its stability by constantly learning and adapting itself to the changing circumstances, is what the British psychiatrist Ross Ashby, one of the founding fathers of cybernetics, called “ultrastability”.

To illustrate it, Ashby designed the homeostat. This clever device consisted of four interconnected RAF bomb control units – mysterious looking black boxes with lots of knobs and switches – that were sensitive to voltage fluctuations. If one unit stopped working properly – say, because of an unexpected external disturbance – the other three would rewire and regroup themselves, compensating for its malfunction and keeping the system’s overall output stable.

Ashby’s homeostat achieved “ultrastability” by always monitoring its internal state and cleverly redeploying its spare resources.

Like the spam filter, it didn’t have to specify all the possible disturbances – only the conditions for how and when it must be updated and redesigned. This is no trivial departure from how the usual technical systems, with their rigid, if-then rules, operate: suddenly, there’s no need to develop procedures for governing every contingency, for – or so one hopes – algorithms and real-time, immediate feedback can do a better job than inflexible rules out of touch with reality.
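
A toy Python simulation in that spirit (a conceptual sketch of ultrastability, not a model of Ashby’s actual hardware): when a disturbance pushes the output out of bounds, units rewire themselves at random until the output settles back within limits.

# Toy "ultrastability": four coupled units; whenever the aggregate output
# leaves the acceptable band, a randomly chosen unit resets its coupling.
import random

random.seed(1)
N = 4  # four interconnected units, as in Ashby's device

def output(weights, disturbance):
    return sum(w * disturbance for w in weights)

def ultrastable_step(weights, disturbance, bound=1.0, max_rewires=1000):
    rewires = 0
    while abs(output(weights, disturbance)) > bound and rewires < max_rewires:
        weights[random.randrange(N)] = random.uniform(-1, 1)  # a unit 'flips a switch'
        rewires += 1
    return weights, rewires

weights = [0.2, -0.1, 0.3, 0.1]
weights, rewires = ultrastable_step(weights, disturbance=5.0)
print(f"stable again after {rewires} rewires, output = {output(weights, 5.0):.2f}")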

Algorithmic regulation could certainly make the administration of existing laws more efficient. If it can fight credit-card fraud, why not tax fraud? Italian bureaucrats have experimented with the redditometro, or income meter, a tool for comparing people’s spending patterns – recorded thanks to an arcane Italian law – with their declared income, so that authorities know when you spend more than you earn. Spain has expressed interest in a similar tool.
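
At its core such a tool is a comparison with a tolerance. A bare-bones Python sketch, with invented records and an invented 20 per cent threshold rather than the redditometro’s real criteria:

# Flag taxpayers whose recorded spending exceeds declared income by more than
# a set tolerance. Records and threshold are invented for illustration.
from typing import NamedTuple

class Taxpayer(NamedTuple):
    name: str
    declared_income: float
    recorded_spending: float

def flag_discrepancies(taxpayers, tolerance=0.2):
    return [t.name for t in taxpayers
            if t.recorded_spending > t.declared_income * (1 + tolerance)]

records = [
    Taxpayer("A", declared_income=30_000, recorded_spending=65_000),
    Taxpayer("B", declared_income=40_000, recorded_spending=42_000),
]
print(flag_discrepancies(records))  # ['A']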

Such systems, however, are toothless against the real culprits of tax evasion – the super-rich families who profit from various offshoring schemes or simply write outrageous tax exemptions into the law. Algorithmic regulation is perfect for enforcing the austerity agenda while leaving those responsible for the fiscal crisis off the hook. To understand whether such systems are working as expected, we need to modify O’Reilly’s question: for whom are they working? If it’s just the tax-evading plutocrats, the global financial institutions interested in balanced national budgets and the companies developing income-tracking software, then it’s hardly a democratic success.

With his belief that algorithmic regulation is based on “a deep understanding of the desired outcome”, O’Reilly cunningly disconnects the means of doing politics from its ends. But the how of politics is as important as the what of politics – in fact, the former often shapes the latter. Everybody agrees that education, health, and security are all “desired outcomes”, but how do we achieve them? In the past, when we faced the stark political choice of delivering them through the market or the state, the lines of the ideological debate were clear. Today, when the presumed choice is between the digital and the analog or between the dynamic feedback and the static law, that ideological clarity is gone – as if the very choice of how to achieve those “desired outcomes” was apolitical and didn’t force us to choose between different and often incompatible visions of communal living.

By assuming that the utopian world of infinite feedback loops is so efficient that it transcends politics, the proponents of algorithmic regulation fall into the same trap as the technocrats of the past. Yes, these systems are terrifyingly efficient – in the same way that Singapore is terrifyingly efficient (O’Reilly, unsurprisingly, praises Singapore for its embrace of algorithmic regulation). And while Singapore’s leaders might believe that they, too, have transcended politics, it doesn’t mean that their regime cannot be assessed outside the linguistic swamp of efficiency and innovation – by using political, not economic benchmarks.

As Silicon Valley keeps corrupting our language with its endless glorification of disruption and efficiency – concepts at odds with the vocabulary of democracy – our ability to question the “how” of politics is weakened. Silicon Valley’s default answer to the how of politics is what I call solutionism: problems are to be dealt with via apps, sensors, and feedback loops – all provided by startups. Earlier this year Google’s Eric Schmidt even promised that startups would provide the solution to the problem of economic inequality: the latter, it seems, can also be “disrupted”. And where the innovators and the disruptors lead, the bureaucrats follow.

The intelligence services embraced solutionism before other government agencies. Thus, they reduced the topic of terrorism from a subject that had some connection to history and foreign policy to an informational problem of identifying emerging terrorist threats via constant surveillance. They urged citizens to accept that instability is part of the game, that its root causes are neither traceable nor reparable, that the threat can only be pre-empted by out-innovating and out-surveilling the enemy with better communications.

Speaking in Athens last November, the Italian philosopher Giorgio Agamben discussed an epochal transformation in the idea of government, “whereby the traditional hierarchical relation between causes and effects is inverted, so that, instead of governing the causes – a difficult and expensive undertaking – governments simply try to govern the effects”.

Nobel laureate Daniel Kahneman

Governments’ current favourite psychologist, Daniel Kahneman. Photograph: Richard Saker for the Observer

For Agamben, this shift is emblematic of modernity. It also explains why the liberalisation of the economy can co-exist with the growing proliferation of control – by means of soap dispensers and remotely managed cars – into everyday life. “If government aims for the effects and not the causes, it will be obliged to extend and multiply control. Causes demand to be known, while effects can only be checked and controlled.” Algorithmic regulation is an enactment of this political programme in technological form.

The true politics of algorithmic regulation become visible once its logic is applied to the social nets of the welfare state. There are no calls to dismantle them, but citizens are nonetheless encouraged to take responsibility for their own health. Consider how Fred Wilson, an influential US venture capitalist, frames the subject. “Health… is the opposite side of healthcare,” he said at a conference in Paris last December. “It’s what keeps you out of the healthcare system in the first place.” Thus, we are invited to start using self-tracking apps and data-sharing platforms and monitor our vital indicators, symptoms and discrepancies on our own.

This goes nicely with recent policy proposals to save troubled public services by encouraging healthier lifestyles. Consider a 2013 report by Westminster council and the Local Government Information Unit, a thinktank, calling for the linking of housing and council benefits to claimants’ visits to the gym – with the help of smartcards. They might not be needed: many smartphones are already tracking how many steps we take every day (Google Now, the company’s virtual assistant, keeps score of such data automatically and periodically presents it to users, nudging them to walk more).

The numerous possibilities that tracking devices offer to health and insurance industries are not lost on O’Reilly. “You know the way that advertising turned out to be the native business model for the internet?” he wondered at a recent conference. “I think that insurance is going to be the native business model for the internet of things.” Things do seem to be heading that way: in June, Microsoft struck a deal with American Family Insurance, the eighth-largest home insurer in the US, in which both companies will fund startups that want to put sensors into smart homes and smart cars for the purposes of “proactive protection”.

An insurance company would gladly subsidise the costs of installing yet another sensor in your house – as long as it can automatically alert the fire department or make front porch lights flash in case your smoke detector goes off. For now, accepting such tracking systems is framed as an extra benefit that can save us some money. But when do we reach a point where not using them is seen as a deviation – or, worse, an act of concealment – that ought to be punished with higher premiums?

Or consider a May 2014 report from 2020health, another thinktank, proposing to extend tax rebates to Britons who give up smoking, stay slim or drink less. “We propose ‘payment by results’, a financial reward for people who become active partners in their health, whereby if you, for example, keep your blood sugar levels down, quit smoking, keep weight off, [or] take on more self-care, there will be a tax rebate or an end-of-year bonus,” they state. Smart gadgets are the natural allies of such schemes: they document the results and can even help achieve them – by constantly nagging us to do what’s expected.

The unstated assumption of most such reports is that the unhealthy are not only a burden to society but that they deserve to be punished (fiscally for now) for failing to be responsible. For what else could possibly explain their health problems but their personal failings? It’s certainly not the power of food companies or class-based differences or various political and economic injustices. One can wear a dozen powerful sensors, own a smart mattress and even do a close daily reading of one’s poop – as some self-tracking aficionados are wont to do – but those injustices would still be nowhere to be seen, for they are not the kind of stuff that can be measured with a sensor. The devil doesn’t wear data. Social injustices are much harder to track than the everyday lives of the individuals whose lives they affect.

In shifting the focus of regulation from reining in institutional and corporate malfeasance to perpetual electronic guidance of individuals, algorithmic regulation offers us a good-old technocratic utopia of politics without politics. Disagreement and conflict, under this model, are seen as unfortunate byproducts of the analog era – to be solved through data collection – and not as inevitable results of economic or ideological conflicts.

However, a politics without politics does not mean a politics without control or administration. As O’Reilly writes in his essay: “New technologies make it possible to reduce the amount of regulation while actually increasing the amount of oversight and production of desirable outcomes.” Thus, it’s a mistake to think that Silicon Valley wants to rid us of government institutions. Its dream state is not the small government of libertarians – a small state, after all, needs neither fancy gadgets nor massive servers to process the data – but the data-obsessed and data-obese state of behavioural economists.

The nudging state is enamoured of feedback technology, for its key founding principle is that while we behave irrationally, our irrationality can be corrected – if only the environment acts upon us, nudging us towards the right option. Unsurprisingly, one of the three lonely references at the end of O’Reilly’s essay is to a 2012 speech entitled “Regulation: Looking Backward, Looking Forward” by Cass Sunstein, the prominent American legal scholar who is the chief theorist of the nudging state.

And while the nudgers have already captured the state by making behavioural psychology the favourite idiom of government bureaucracy – Daniel Kahneman is in, Machiavelli is out – the algorithmic regulation lobby advances in more clandestine ways. They create innocuous non-profit organisations like Code for America which then co-opt the state – under the guise of encouraging talented hackers to tackle civic problems.

Airbnb's homepage.

Airbnb: part of the reputation-driven economy.

Such initiatives aim to reprogramme the state and make it feedback-friendly, crowding out other means of doing politics. For all those tracking apps, algorithms and sensors to work, databases need interoperability – which is what such pseudo-humanitarian organisations, with their ardent belief in open data, demand. And when the government is too slow to move at Silicon Valley’s speed, they simply move inside the government. Thus, Jennifer Pahlka, the founder of Code for America and a protege of O’Reilly, became the deputy chief technology officer of the US government – while pursuing a one-year “innovation fellowship” from the White House.

Cash-strapped governments welcome such colonisation by technologists – especially if it helps to identify and clean up datasets that can be profitably sold to companies who need such data for advertising purposes. Recent clashes over the sale of student and health data in the UK are just a precursor of battles to come: after all state assets have been privatised, data is the next target. For O’Reilly, open data is “a key enabler of the measurement revolution”.

This “measurement revolution” seeks to quantify the efficiency of various social programmes, as if the rationale behind the social nets that some of them provide was to achieve perfection of delivery. The actual rationale, of course, was to enable a fulfilling life by suppressing certain anxieties, so that citizens can pursue their life projects relatively undisturbed. This vision did spawn a vast bureaucratic apparatus and the critics of the welfare state from the left – most prominently Michel Foucault – were right to question its disciplining inclinations. Nonetheless, neither perfection nor efficiency were the “desired outcome” of this system. Thus, to compare the welfare state with the algorithmic state on those grounds is misleading.

But we can compare their respective visions for human fulfilment – and the role they assign to markets and the state. Silicon Valley’s offer is clear: thanks to ubiquitous feedback loops, we can all become entrepreneurs and take care of our own affairs! As Brian Chesky, the chief executive of Airbnb, told the Atlantic last year, “What happens when everybody is a brand? When everybody has a reputation? Every person can become an entrepreneur.”

Under this vision, we will all code (for America!) in the morning, drive Uber cars in the afternoon, and rent out our kitchens as restaurants – courtesy of Airbnb – in the evening. As O’Reilly writes of Uber and similar companies, “these services ask every passenger to rate their driver (and drivers to rate their passenger). Drivers who provide poor service are eliminated. Reputation does a better job of ensuring a superb customer experience than any amount of government regulation.”

The state behind the “sharing economy” does not wither away; it might be needed to ensure that the reputation accumulated on Uber, Airbnb and other platforms of the “sharing economy” is fully liquid and transferable, creating a world where our every social interaction is recorded and assessed, erasing whatever differences exist between social domains. Someone, somewhere will eventually rate you as a passenger, a house guest, a student, a patient, a customer. Whether this ranking infrastructure will be decentralised, provided by a giant like Google or rest with the state is not yet clear but the overarching objective is: to make reputation into a feedback-friendly social net that could protect the truly responsible citizens from the vicissitudes of deregulation.

Admiring the reputation models of Uber and Airbnb, O’Reilly wants governments to be “adopting them where there are no demonstrable ill effects”. But what counts as an “ill effect” and how to demonstrate it is a key question that belongs to the how of politics that algorithmic regulation wants to suppress. It’s easy to demonstrate “ill effects” if the goal of regulation is efficiency but what if it is something else? Surely, there are some benefits – fewer visits to the psychoanalyst, perhaps – in not having your every social interaction ranked?

The imperative to evaluate and demonstrate “results” and “effects” already presupposes that the goal of policy is the optimisation of efficiency. However, as long as democracy is irreducible to a formula, its composite values will always lose this battle: they are much harder to quantify.

For Silicon Valley, though, the reputation-obsessed algorithmic state of the sharing economy is the new welfare state. If you are honest and hardworking, your online reputation would reflect this, producing a highly personalised social net. It is “ultrastable” in Ashby’s sense: while the welfare state assumes the existence of specific social evils it tries to fight, the algorithmic state makes no such assumptions. The future threats can remain fully unknowable and fully addressable – on the individual level.

Silicon Valley, of course, is not alone in touting such ultrastable individual solutions. Nassim Taleb, in his best-selling 2012 book Antifragile, makes a similar, if more philosophical, plea for maximising our individual resourcefulness and resilience: don’t get one job but many, don’t take on debt, count on your own expertise. It’s all about resilience, risk-taking and, as Taleb puts it, “having skin in the game”. As Julian Reid and Brad Evans write in their new book, Resilient Life: The Art of Living Dangerously, this growing cult of resilience masks a tacit acknowledgement that no collective project could even aspire to tame the proliferating threats to human existence – we can only hope to equip ourselves to tackle them individually. “When policy-makers engage in the discourse of resilience,” write Reid and Evans, “they do so in terms which aim explicitly at preventing humans from conceiving of danger as a phenomenon from which they might seek freedom and even, in contrast, as that to which they must now expose themselves.”

What, then, is the progressive alternative? “The enemy of my enemy is my friend” doesn’t work here: just because Silicon Valley is attacking the welfare state doesn’t mean that progressives should defend it to the very last bullet (or tweet). First, even leftist governments have limited space for fiscal manoeuvres, as the kind of discretionary spending required to modernise the welfare state would never be approved by the global financial markets. And it’s the ratings agencies and bond markets – not the voters – who are in charge today.

Second, the leftist critique of the welfare state has become only more relevant today when the exact borderlines between welfare and security are so blurry. When Google’s Android powers so much of our everyday life, the government’s temptation to govern us through remotely controlled cars and alarm-operated soap dispensers will be all too great. This will expand government’s hold over areas of life previously free from regulation.

With so much data, the government’s favourite argument in fighting terror – if only the citizens knew as much as we do, they too would impose all these legal exceptions – easily extends to other domains, from health to climate change. Consider a recent academic paper that used Google search data to study obesity patterns in the US, finding significant correlation between search keywords and body mass index levels. “Results suggest great promise of the idea of obesity monitoring through real-time Google Trends data”, note the authors, which would be “particularly attractive for government health institutions and private businesses such as insurance companies.”
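
The statistical core of such a study is a correlation between a search-volume index and regional health figures. An illustrative Python sketch with invented numbers (not the cited paper’s data or method):

# Correlate a relative search-volume index with mean BMI across regions.
# Both series below are made up for illustration. Requires Python 3.10+.
from statistics import correlation

search_index = [42, 55, 61, 48, 70]            # relative search volume per region
mean_bmi     = [26.1, 27.3, 28.0, 26.6, 29.1]  # mean body mass index per region

print(f"Pearson r = {correlation(search_index, mean_bmi):.2f}")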

If Google senses a flu epidemic somewhere, it’s hard to challenge its hunch – we simply lack the infrastructure to process so much data at this scale. Google can be proven wrong after the fact – as has recently been the case with its flu trends data, which was shown to overestimate the number of infections, possibly because of its failure to account for the intense media coverage of flu – but so is the case with most terrorist alerts. It’s the immediate, real-time nature of computer systems that makes them perfect allies of an infinitely expanding and pre-emption‑obsessed state.

Perhaps, the case of Gloria Placente and her failed trip to the beach was not just a historical oddity but an early omen of how real-time computing, combined with ubiquitous communication technologies, would transform the state. One of the few people to have heeded that omen was a little-known American advertising executive called Robert MacBride, who pushed the logic behind Operation Corral to its ultimate conclusions in his unjustly neglected 1967 book, The Automated State.

At the time, America was debating the merits of establishing a national data centre to aggregate various national statistics and make it available to government agencies. MacBride attacked his contemporaries’ inability to see how the state would exploit the metadata accrued as everything was being computerised. Instead of “a large scale, up-to-date Austro-Hungarian empire”, modern computer systems would produce “a bureaucracy of almost celestial capacity” that can “discern and define relationships in a manner which no human bureaucracy could ever hope to do”.

“Whether one bowls on a Sunday or visits a library instead is [of] no consequence since no one checks those things,” he wrote. Not so when computer systems can aggregate data from different domains and spot correlations. “Our individual behaviour in buying and selling an automobile, a house, or a security, in paying our debts and acquiring new ones, and in earning money and being paid, will be noted meticulously and studied exhaustively,” warned MacBride. Thus, a citizen will soon discover that “his choice of magazine subscriptions… can be found to indicate accurately the probability of his maintaining his property or his interest in the education of his children.” This sounds eerily similar to the recent case of a hapless father who found that his daughter was pregnant from a coupon that Target, a retailer, sent to their house. Target’s hunch was based on its analysis of products – for example, unscented lotion – usually bought by other pregnant women.

For MacBride the conclusion was obvious. “Political rights won’t be violated but will resemble those of a small stockholder in a giant enterprise,” he wrote. “The mark of sophistication and savoir-faire in this future will be the grace and flexibility with which one accepts one’s role and makes the most of what it offers.” In other words, since we are all entrepreneurs first – and citizens second – we might as well make the most of it.

What, then, is to be done? Technophobia is no solution. Progressives need technologies that would stick with the spirit, if not the institutional form, of the welfare state, preserving its commitment to creating ideal conditions for human flourishing. Even some ultrastability is welcome. Stability was a laudable goal of the welfare state before it had encountered a trap: in specifying the exact protections that the state was to offer against the excesses of capitalism, it could not easily deflect new, previously unspecified forms of exploitation.

How do we build welfarism that is both decentralised and ultrastable? A form of guaranteed basic income – whereby some welfare services are replaced by direct cash transfers to citizens – fits the two criteria.

Creating the right conditions for the emergence of political communities around causes and issues they deem relevant would be another good step. Full compliance with the principle of ultrastability dictates that such issues cannot be anticipated or dictated from above – by political parties or trade unions – and must be left unspecified.

What can be specified is the kind of communications infrastructure needed to abet this cause: it should be free to use, hard to track, and open to new, subversive uses. Silicon Valley’s existing infrastructure is great for fulfilling the needs of the state, not of self-organising citizens. It can, of course, be redeployed for activist causes – and it often is – but there’s no reason to accept the status quo as either ideal or inevitable.

Why, after all, appropriate what should belong to the people in the first place? While many of the creators of the internet bemoan how low their creature has fallen, their anger is misdirected. The fault is not with that amorphous entity but, first of all, with the absence of robust technology policy on the left – a policy that can counter the pro-innovation, pro-disruption, pro-privatisation agenda of Silicon Valley. In its absence, all these emerging political communities will operate with their wings clipped. Whether the next Occupy Wall Street would be able to occupy anything in a truly smart city remains to be seen: most likely, they would be out-censored and out-droned.

To his credit, MacBride understood all of this in 1967. “Given the resources of modern technology and planning techniques,” he warned, “it is really no great trick to transform even a country like ours into a smoothly running corporation where every detail of life is a mechanical function to be taken care of.” MacBride’s fear is O’Reilly’s master plan: the government, he writes, ought to be modelled on the “lean startup” approach of Silicon Valley, which is “using data to constantly revise and tune its approach to the market”. It’s this very approach that Facebook has recently deployed to maximise user engagement on the site: if showing users more happy stories does the trick, so be it.

Algorithmic regulation, whatever its immediate benefits, will give us a political regime where technology corporations and government bureaucrats call all the shots. The Polish science fiction writer Stanislaw Lem, in a pointed critique of cybernetics published, as it happens, roughly at the same time as The Automated State, put it best: “Society cannot give up the burden of having to decide about its own fate by sacrificing this freedom for the sake of the cybernetic regulator.”

Geoengineering Approaches to Reduce Climate Change Unlikely to Succeed (Science Daily)

Dec. 5, 2013 — Reducing the amount of sunlight reaching the planet’s surface by geoengineering may not undo climate change after all. Two German researchers used a simple energy balance analysis to explain how Earth’s water cycle responds differently to heating by sunlight than it does to warming due to a stronger atmospheric greenhouse effect. Further, they show that this difference implies that reflecting sunlight to reduce temperatures may have unwanted effects on Earth’s rainfall patterns.

Heavy rainfall events can be more common in a warmer world. (Credit: Annett Junginger, distributed via imaggeo.egu.eu)

The results are now published in Earth System Dynamics, an open access journal of the European Geosciences Union (EGU).

Global warming alters Earth’s water cycle since more water evaporates to the air as temperatures increase. Increased evaporation can dry out some regions while, at the same time, result in more rain falling in other areas due to the excess moisture in the atmosphere. The more water evaporates per degree of warming, the stronger the influence of increasing temperature on the water cycle. But the new study shows the water cycle does not react the same way to different types of warming.

Axel Kleidon and Maik Renner of the Max Planck Institute for Biogeochemistry in Jena, Germany, used a simple energy balance model to determine how sensitive the water cycle is to an increase in surface temperature due to a stronger greenhouse effect and to an increase in solar radiation. They predicted the response of the water cycle for the two cases and found that, in the former, evaporation increases by 2% per degree of warming while in the latter this number reaches 3%. This prediction confirmed results of much more complex climate models.

“These different responses to surface heating are easy to explain,” says Kleidon, who uses a pot on the kitchen stove as an analogy. “The temperature in the pot is increased by putting on a lid or by turning up the heat — but these two cases differ by how much energy flows through the pot,” he says. A stronger greenhouse effect puts a thicker ‘lid’ over Earth’s surface but, if there is no additional sunlight (if we don’t turn up the heat on the stove), extra evaporation takes place solely due to the increase in temperature. Turning up the heat by increasing solar radiation, on the other hand, enhances the energy flow through Earth’s surface because of the need to balance the greater energy input with stronger cooling fluxes from the surface. As a result, there is more evaporation and a stronger effect on the water cycle.

In the new Earth System Dynamics study the authors also show how these findings can have profound consequences for geoengineering. Many geoengineering approaches aim to reduce global warming by reducing the amount of sunlight reaching Earth’s surface (or, in the pot analogy, reduce the heat from the stove). But when Kleidon and Renner applied their results to such a geoengineering scenario, they found out that simultaneous changes in the water cycle and the atmosphere cannot be compensated for at the same time. Therefore, reflecting sunlight by geoengineering is unlikely to restore the planet’s original climate.

“It’s like putting a lid on the pot and turning down the heat at the same time,” explains Kleidon. “While in the kitchen you can reduce your energy bill by doing so, in the Earth system this slows down the water cycle with wide-ranging potential consequences,” he says.
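The asymmetry can be seen in a back-of-the-envelope calculation that uses nothing beyond the two sensitivities reported above. A minimal sketch, assuming a greenhouse warming of 1 °C that geoengineering then cancels by dimming sunlight (the 2% and 3% figures come from the study; everything else is illustrative):

```python
# Back-of-the-envelope sketch of the asymmetry Kleidon and Renner describe.
# Assumed sensitivities (the study's headline numbers): evaporation changes
# by about 2% per degree of greenhouse warming and 3% per degree of
# solar-driven warming.
GREENHOUSE_SENS = 0.02   # fractional change in evaporation per K (greenhouse)
SOLAR_SENS = 0.03        # fractional change in evaporation per K (sunlight)

greenhouse_warming = 1.0             # K of warming from a stronger greenhouse effect
solar_cooling = -greenhouse_warming  # K of cooling engineered by reflecting sunlight

# Temperature is restored...
net_temperature_change = greenhouse_warming + solar_cooling   # 0 K

# ...but evaporation is not, because the two forcings differ in strength.
net_evaporation_change = (GREENHOUSE_SENS * greenhouse_warming
                          + SOLAR_SENS * solar_cooling)        # -0.01

print(f"Temperature change: {net_temperature_change:+.1f} K")
print(f"Evaporation change: {net_evaporation_change:+.1%}")    # about -1%: a slower water cycle
```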

Kleidon and Renner’s insight comes from looking at the processes that heat and cool Earth’s surface and how they change when the surface warms. Evaporation from the surface plays a key role, but the researchers also took into account how the evaporated water is transported into the atmosphere. They combined simple energy balance considerations with a physical assumption for the way water vapour is transported, and separated the contributions of surface heating from solar radiation and from increased greenhouse gases in the atmosphere to obtain the two sensitivities. One of the referees for the paper commented: “it is a stunning result that such a simple analysis yields the same results as the climate models.”

Journal Reference:

  1. A. Kleidon, M. Renner. A simple explanation for the sensitivity of the hydrologic cycle to global climate change. Earth System Dynamics Discussions, 2013; 4 (2): 853. DOI: 10.5194/esdd-4-853-2013

Water management in Brazil is critical, researchers say (Fapesp)

The assessment was made by participants in a seminar on water resources and agriculture, held at FAPESP as part of the activities of the 2013 Fundação Bunge Award (Wikipedia)

Oct. 9, 2013

By Elton Alisson

Agência FAPESP – The management of water resources in Brazil is a critical problem, owing to the lack of mechanisms, technologies and, above all, of enough trained people to manage the country’s river basins adequately. That was the assessment of researchers taking part in the “Seminar on Water Resources and Agriculture”, held on October 2 at FAPESP.

The event was part of the 58th Fundação Bunge Award and the 34th Fundação Bunge Youth Award, which this year covered the areas of Water Resources and Agriculture and of Literary Criticism. In Water Resources and Agriculture, the awards went, respectively, to professors Klaus Reichardt, of the Center for Nuclear Energy in Agriculture (CENA) at the University of São Paulo (USP), and Samuel Beskow, of the Federal University of Pelotas (UFPel).

“Brazil has water-management problems because there are not enough mechanisms, instruments, technologies and, above all, people trained with the interdisciplinary background needed to tackle and solve water-management problems,” said José Galizia Tundisi, a researcher at the International Institute of Ecology (IIE) invited to take part in the event.

“We need to generate methods, concepts and mechanisms suited to the country’s conditions,” said the researcher, who currently directs the worldwide training program for water-resource managers of the Global Network of Science Academies (IAP), an institution representing more than one hundred academies of science around the world.

According to Tundisi, river basins were adopted as the priority units for managing water use by the National Water Resources Policy, enacted in 1997. All of the country’s river basins, however, lack the instruments that would make adequate management possible, the researcher noted.

“It is very hard to find a river basin committee [a board made up of civil-society representatives, responsible for managing the water resources of a given basin] that is fully equipped in terms of techniques and programs to improve the performance of water-use management,” he said.

Hydrological modeling

According to Tundisi, among the instruments that can make management and decision-making easier for Brazilian river basins are computational models that simulate basin behavior, such as the one developed by Beskow, a professor in the Department of Water Resources Engineering at UFPel and winner of this year’s Fundação Bunge Youth Award in Water Resources and Agriculture.

Named the Lavras Simulation of Hydrology (Lash), the hydrological model was developed by Beskow during his doctorate at the Federal University of Lavras (Ufla), in Minas Gerais, with a research stay at Purdue University, in the United States.

“There are several hydrological models developed in different parts of the world, especially in the United States and Europe, and they are extremely valuable tools for management and decision-making related to river basins,” said Beskow.

“These hydrological models are useful for designing hydraulic structures, such as bridges and reservoirs, for making real-time flood forecasts, and for measuring the impact of actions such as deforestation or changes in land use in the areas surrounding river basins,” he said.

According to the researcher, the first version of Lash was completed in 2009 and applied in studies modeling rainfall and streamflow to assess the potential for electricity generation in small river basins, such as that of the Ribeirão Jaguará, in Minas Gerais, which covers 32 square kilometers.
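To make the idea of rainfall-runoff modeling concrete, here is a deliberately simplified single-bucket water-balance sketch. It is purely illustrative and does not reproduce the structure of Lash; the function name, parameters and values are all hypothetical:

```python
# Toy single-bucket rainfall-runoff model: the catchment is treated as one
# soil "bucket" that fills with rain, loses water to evapotranspiration and
# spills runoff when full. Illustrative only; not the Lash model.

def simulate_runoff(rain_mm, pet_mm, capacity_mm=100.0, storage_mm=50.0):
    """Return daily runoff (mm) given daily rainfall and potential evapotranspiration."""
    runoff = []
    for rain, pet in zip(rain_mm, pet_mm):
        storage_mm += rain
        # Actual evapotranspiration is limited by the water actually stored.
        et = min(pet, storage_mm)
        storage_mm -= et
        # Whatever exceeds the bucket's capacity leaves the catchment as runoff.
        spill = max(0.0, storage_mm - capacity_mm)
        storage_mm -= spill
        runoff.append(spill)
    return runoff

# Example: one week of daily rainfall and potential evapotranspiration, in mm.
rain = [0, 12, 45, 5, 0, 60, 3]
pet = [4, 3, 2, 4, 5, 2, 4]
print(simulate_runoff(rain, pet))
```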

Encouraged by these results, in 2011 the researcher began developing a second version of the hydrological simulation model, which he intends to make available to managers of river basins of different sizes.

“The model now includes a database through which users can import and store data on rainfall, temperature, humidity and land use, among other parameters, generated at the different stations of a given basin’s monitoring network, allowing them to manage water resources,” he said.

One of the main motivations for developing hydrological simulation models in Brazil, according to the researcher, is the lack of fluviometric data (measurements of water level, velocity and streamflow in rivers) for the country’s river basins.

The number of fluviometric stations registered in the Hydrological Information System (HidroWeb), operated by the National Water Agency (ANA), is low, and many of them are out of operation, Beskow said.

“There are little more than one hundred fluviometric stations in Rio Grande do Sul registered in that system, which give us time series of at most ten years,” the researcher said. “That number of stations is far too low to manage the water resources of a state like Rio Grande do Sul.”

Rational use of water

Beskow and Klaus Reichardt, who is also a professor at the Luiz de Queiroz College of Agriculture (Esalq), stressed the need to develop technologies to use water ever more rationally in agriculture, since the sector consumes most of the readily available fresh water in the world today.

Of all the water on Earth, which covers some 70% of the planet, 97.5% is salt water and 2.5% is fresh. Of that tiny share of fresh water, however, 69% is locked up in glaciers and permanent snow, 29.8% in aquifers and 0.9% in reservoirs. Of the 0.3% that is readily available, 65% is used by agriculture, 22% by industry, 7% for human consumption and 6% is lost, Reichardt pointed out.
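Multiplying these nested percentages together shows just how small the readily available share is; a quick arithmetic sketch using only the figures quoted above:

```python
# Nested shares of Earth's water, using the figures quoted by Reichardt.
fresh = 0.025              # 2.5% of all water is fresh
readily_available = 0.003  # 0.3% of that fresh water is readily available
agriculture = 0.65         # 65% of the readily available water goes to agriculture

share_available = fresh * readily_available
share_agriculture = share_available * agriculture

print(f"Readily available fresh water: {share_available:.4%} of all water on Earth")
print(f"Used by agriculture:           {share_agriculture:.4%} of all water on Earth")
# Roughly 0.0075% of all water is readily available fresh water,
# and about 0.005% of all water goes to agriculture.
```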

“In Brazil, we have the Amazon and the Guarani aquifer, which could be tapped,” said the researcher, who has had projects supported by FAPESP.

Reichardt won the award for his contribution to soil physics, studying and developing ways of calculating the movement of water in sandy, clayey and other soils, which vary in their properties. “This was applied to various soil types, with saturated hydraulic conductivity as a function of moisture, for example,” he said.

In recent years the researcher has been working, in collaboration with colleagues at the Brazilian Agricultural Research Corporation (Embrapa), on computed tomography for measuring soil water. “With this technique we have been able to uncover very interesting phenomena that occur in the soil,” said Reichardt.

The cost of inaction

The event was attended by Eduardo Moacyr Krieger and Carlos Henrique de Brito Cruz, respectively vice-president and scientific director of FAPESP; Jacques Marcovitch, president of Fundação Bunge; Ardaillon Simões, president of the Pernambuco State Science and Technology Foundation (Facepe); and José Antônio Frizzone, a professor at Esalq, among other officials.

In his remarks, Krieger noted that Fundação Bunge and FAPESP have much in common. “By giving annual awards to the best researchers in particular areas, Fundação Bunge shows its concern for scientific merit and the quality of research,” Krieger said.

“FAPESP, in a way, does the same when it ‘rewards’ researchers through fellowships, grants and other forms of support, taking into account the quality of the research carried out.”

Brito Cruz stressed that the award given by Fundação Bunge helps create, in Brazil, the possibility for researchers to stand out in Brazilian society for their ability and intellectual achievements.

“That is essential for building a country that is master of its own destiny, able to create its future and face new challenges of any kind,” said Brito Cruz. “A country can only advance if it has people with the intellectual capacity to understand problems and devise solutions to them.”

Marcovitch, in turn, argued that the problem of managing water use in the country can be approached in two ways. The first starts from the premise that the country lies in a splendid cradle, has abundant natural resources and therefore need not worry about the problem. The second warns of the consequences of inaction regarding the need for proper management of the country’s water resources, as Tundisi has been doing, so as to encourage researchers such as Beskow and Reichardt to find answers.

“[We researchers] have a responsibility to raise society’s awareness of the risks and the cost of inaction regarding the management of the country’s water resources,” he said.

Is War Really Disappearing? New Analysis Suggests Not (Science Daily)

Aug. 29, 2013 — While some researchers have claimed that war between nations is in decline, a new analysis suggests we shouldn’t be too quick to celebrate a more peaceful world.

The study finds that there is no clear trend indicating that nations are less eager to wage war, said Bear Braumoeller, author of the study and associate professor of political science at The Ohio State University.

Conflict does appear to be less common than it had been in the past, he said. But that’s due more to an inability to fight than to an unwillingness to do so.

“As empires fragment, the world has split up into countries that are smaller, weaker and farther apart, so they are less able to fight each other,” Braumoeller said.

“Once you control for their ability to fight each other, the proclivity to go to war hasn’t really changed over the last two centuries.”

Braumoeller presented his research Aug. 29 in Chicago at the annual meeting of the American Political Science Association.

Several researchers have claimed in recent years that war is in decline, most notably Steven Pinker in his 2011 book The Better Angels of Our Nature: Why Violence Has Declined.

As evidence, Pinker points to a decline in war deaths per capita. But Braumoeller said he believes that is a flawed measure.

“That accurately reflects the average citizen’s risk from death in war, but countries’ calculations in war are more complicated than that,” he said.

Moreover, since population grows exponentially, it would be hard for war deaths to keep up with the booming number of people in the world.

Because we cannot predict whether wars will be quick and easy or long and drawn-out (“Remember ‘Mission Accomplished’?” Braumoeller says), a better measure of how warlike we as humans are is to start with how often countries use force — such as missile strikes or armed border skirmishes — against other countries, he said.

“Any one of these uses of force could conceivably start a war, so their frequency is a good indication of how war prone we are at any particular time,” he said.

Braumoeller used the Correlates of War Militarized Interstate Dispute database, which scholars from around the world study to measure uses of force up to and including war.

The data shows that the uses of force held more or less constant through World War I, but then increased steadily thereafter.

This trend is consistent with the growth in the number of countries over the course of the last two centuries.

But just looking at the number of conflicts per pair of countries is misleading, he said, because countries won’t go to war if they aren’t “politically relevant” to each other.

Military power and geography play a big role in relevance; it is unlikely that a small, weak country in South America would start a war with a small, weak country in Africa.

Once Braumoeller took into account both the number of countries and their political relevance to one another, the results showed essentially no change to the trend of the use of force over the last 200 years.
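The correction matters because the number of possible country pairs (dyads) grows roughly with the square of the number of countries, so the raw count of disputes can climb even when the propensity of any given pair to fight stays flat. A minimal sketch of the normalization; all the figures below are invented for illustration and are not from the Correlates of War data:

```python
# Why raw dispute counts mislead: possible country pairs grow roughly
# quadratically with the number of countries. All numbers are invented.

def dyads(n_countries):
    """Number of unordered country pairs."""
    return n_countries * (n_countries - 1) // 2

periods = [
    # (label, number of countries, observed uses of force)
    ("c. 1820", 50, 25),
    ("c. 1920", 70, 50),
    ("c. 2000", 190, 360),
]

for label, n, disputes in periods:
    print(f"{label}: {disputes:3d} disputes over {dyads(n):6d} dyads "
          f"-> rate per dyad = {disputes / dyads(n):.5f}")

# The raw count rises sharply across the periods, but the per-dyad rate stays
# essentially flat, which is Braumoeller's point, even before restricting the
# denominator to politically relevant dyads.
```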

While researchers such as Pinker have suggested that countries are actually less inclined to fight than they once were, Braumoeller said these results suggest a different reason for the recent decline in war.

“With countries being smaller, weaker and more distant from each other, they certainly have less ability to fight. But we as humans shouldn’t get credit for being more peaceful just because we’re not as able to fight as we once were,” he said.

“There is no indication that we actually have less proclivity to wage war.”

When Will My Computer Understand Me? (Science Daily)

June 10, 2013 — It’s not hard to tell the difference between the “charge” of a battery and criminal “charges.” But for computers, distinguishing between the various meanings of a word is difficult.

A “charge” can be a criminal charge, an accusation, a battery charge, or a person in your care. Some of those meanings are closer together, others further apart. (Credit: Image courtesy of University of Texas at Austin, Texas Advanced Computing Center)

For more than 50 years, linguists and computer scientists have tried to get computers to understand human language by programming semantics as software. Driven initially by efforts to translate Russian scientific texts during the Cold War (and more recently by the value of information retrieval and data analysis tools), these efforts have met with mixed success. IBM’s Jeopardy-winning Watson system and Google Translate are high-profile, successful applications of language technologies, but the humorous answers and mistranslations they sometimes produce are evidence of the continuing difficulty of the problem.

Our ability to easily distinguish between multiple word meanings is rooted in a lifetime of experience. Using the context in which a word is used, an intrinsic understanding of syntax and logic, and a sense of the speaker’s intention, we intuit what another person is telling us.

“In the past, people have tried to hand-code all of this knowledge,” explained Katrin Erk, a professor of linguistics at The University of Texas at Austin focusing on lexical semantics. “I think it’s fair to say that this hasn’t been successful. There are just too many little things that humans know.”

Other efforts have tried to use dictionary meanings to train computers to better understand language, but these attempts have also faced obstacles. Dictionaries have their own sense distinctions, which are crystal clear to the dictionary-maker but murky to the dictionary reader. Moreover, no two dictionaries provide the same set of meanings — frustrating, right?

Watching annotators struggle to make sense of conflicting definitions led Erk to try a different tactic. Instead of hard-coding human logic or deciphering dictionaries, why not mine a vast body of texts (which are a reflection of human knowledge) and use the implicit connections between the words to create a weighted map of relationships — a dictionary without a dictionary?

“An intuition for me was that you could visualize the different meanings of a word as points in space,” she said. “You could think of them as sometimes far apart, like a battery charge and criminal charges, and sometimes close together, like criminal charges and accusations (“the newspaper published charges…”). The meaning of a word in a particular context is a point in this space. Then we don’t have to say how many senses a word has. Instead we say: ‘This use of the word is close to this usage in another sentence, but far away from the third use.'”

To create a model that can accurately recreate the intuitive ability to distinguish word meaning requires a lot of text and a lot of analytical horsepower.

“The lower end for this kind of research is a text collection of 100 million words,” she explained. “If you can give me a few billion words, I’d be much happier. But how can we process all of that information? That’s where supercomputers and Hadoop come in.”

Applying Computational Horsepower

Erk initially conducted her research on desktop computers, but around 2009, she began using the parallel computing systems at the Texas Advanced Computing Center (TACC). Access to a special Hadoop-optimized subsystem on TACC’s Longhorn supercomputer allowed Erk and her collaborators to expand the scope of their research. Hadoop is a software architecture well suited to text analysis and the data mining of unstructured data that can also take advantage of large computer clusters. Computational models that take weeks to run on a desktop computer can run in hours on Longhorn. This opened up new possibilities.

“In a simple case we count how often a word occurs in close proximity to other words. If you’re doing this with one billion words, do you have a couple of days to wait to do the computation? It’s no fun,” Erk said. “With Hadoop on Longhorn, we could get the kind of data that we need to do language processing much faster. That enabled us to use larger amounts of data and develop better models.”
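The “simple case” Erk mentions, counting how often words occur near one another, is the basic building block of these distributional models. A minimal single-machine sketch of the counting step (a Hadoop job distributes the same computation across a cluster; the toy corpus and window size here are illustrative choices):

```python
from collections import Counter, defaultdict

# Toy corpus; in practice this would be hundreds of millions of words.
corpus = [
    "the battery charge lasted all day",
    "the prosecutor filed a criminal charge",
    "the charge against him was an accusation of fraud",
]

WINDOW = 2  # words on either side counted as "close proximity"
cooccurrence = defaultdict(Counter)

for sentence in corpus:
    tokens = sentence.split()
    for i, word in enumerate(tokens):
        lo, hi = max(0, i - WINDOW), min(len(tokens), i + WINDOW + 1)
        for j in range(lo, hi):
            if j != i:
                cooccurrence[word][tokens[j]] += 1

# Each word is now a vector of neighbor counts: a point in a high-dimensional
# space whose axes are the other words in the corpus.
print(cooccurrence["charge"].most_common(5))
```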

Treating words in a relational, non-fixed way corresponds to emerging psychological notions of how the mind deals with language and concepts in general, according to Erk. Instead of rigid definitions, concepts have “fuzzy boundaries” where the meaning, value and limits of the idea can vary considerably according to the context or conditions. Erk takes this idea of language and recreates a model of it from hundreds of thousands of documents.

Say That Another Way

So how can we describe word meanings without a dictionary? One way is to use paraphrases. A good paraphrase is one that is “close to” the word meaning in that high-dimensional space that Erk described.

“We use a gigantic 10,000-dimensional space with all these different points for each word to predict paraphrases,” Erk explained. “If I give you a sentence such as, ‘This is a bright child,’ the model can tell you automatically what are good paraphrases (‘an intelligent child’) and what are bad paraphrases (‘a glaring child’). This is quite useful in language technology.”
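One simple way to turn such a space into paraphrase predictions is to compare vectors with cosine similarity: candidate words whose vectors lie close to the target’s vector in the given context count as good paraphrases, distant ones as bad. A sketch under that assumption; the tiny hand-made vectors below are purely illustrative and are not output of Erk’s 10,000-dimensional model:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Hypothetical low-dimensional stand-ins for points in the real model's space
# (imagine axes such as "school", "light", "praise", built from co-occurrence counts).
vectors = {
    "bright (as in 'a bright child')": [0.9, 0.1, 0.8],
    "intelligent":                     [0.8, 0.2, 0.9],
    "glaring":                         [0.1, 0.9, 0.1],
}

target = vectors["bright (as in 'a bright child')"]
for candidate in ("intelligent", "glaring"):
    print(candidate, round(cosine(target, vectors[candidate]), 3))
# "intelligent" scores close to 1.0 (a good paraphrase); "glaring" scores low (a bad one).
```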

Language technology already helps millions of people perform practical and valuable tasks every day via web searches and question-answer systems, but it is poised for even more widespread applications.

Automatic information extraction is an application where Erk’s paraphrasing research may be critical. Say, for instance, you want to extract a list of diseases, their causes, symptoms and cures from millions of pages of medical information on the web.

“Researchers use slightly different formulations when they talk about diseases, so knowing good paraphrases would help,” Erk said.

In a paper to appear in ACM Transactions on Intelligent Systems and Technology, Erk and her collaborators illustrated they could achieve state-of-the-art results with their automatic paraphrasing approach.

Recently, Erk and Ray Mooney, a computer science professor also at The University of Texas at Austin, were awarded a grant from the Defense Advanced Research Projects Agency to combine Erk’s distributional, high dimensional space representation of word meanings with a method of determining the structure of sentences based on Markov logic networks.

“Language is messy,” said Mooney. “There is almost nothing that is true all the time. When we ask, ‘How similar is this sentence to another sentence?’ our system turns that question into a probabilistic theorem-proving task, and that task can be very computationally complex.”

In their paper, “Montague Meets Markov: Deep Semantics with Probabilistic Logical Form,” presented at the Second Joint Conference on Lexical and Computational Semantics (STARSEM2013) in June, Erk, Mooney and colleagues announced their results on a number of challenge problems from the field of artificial intelligence.

In one problem, Longhorn was given a sentence and had to infer whether another sentence was true based on the first. Using an ensemble of different sentence parsers, word meaning models and Markov logic implementations, Mooney and Erk’s system predicted the correct answer with 85% accuracy, near the top of the results reported for this challenge. They continue to work to improve the system.

There is a common saying in the machine-learning world that goes: “There’s no data like more data.” While more data helps, taking advantage of that data is key.

“We want to get to a point where we don’t have to learn a computer language to communicate with a computer. We’ll just tell it what to do in natural language,” Mooney said. “We’re still a long way from having a computer that can understand language as well as a human being does, but we’ve made definite progress toward that goal.”

Brain Scans Predict Which Criminals Are Most Likely to Reoffend (Wired)

BY GREG MILLER

03.26.13 – 3:40 PM

Photo: Erika Kyte/Getty Images

Brain scans of convicted felons can predict which ones are most likely to get arrested after they get out of prison, scientists have found in a study of 96 male offenders.

“It’s the first time brain scans have been used to predict recidivism,” said neuroscientist Kent Kiehl of the Mind Research Network in Albuquerque, New Mexico, who led the new study. Even so, Kiehl and others caution that the method is nowhere near ready to be used in real-life decisions about sentencing or parole.

Generally speaking, brain scans or other neuromarkers could be useful in the criminal justice system if the benefits in terms of better accuracy outweigh the likely higher costs of the technology compared to conventional pencil-and-paper risk assessments, says Stephen Morse, a legal scholar specializing in criminal law and neuroscience at the University of Pennsylvania. The key questions to ask, Morse says, are: “How much predictive accuracy does the marker add beyond usually less expensive behavioral measures? How subject is it to counter-measures if a subject wishes to ‘defeat’ a scan?”

Those are still open questions with regard to the new method, which Kiehl and colleagues, including postdoctoral fellow Eyal Aharoni, describe in a paper to be published this week in the Proceedings of the National Academy of Sciences.

The test targets impulsivity. In a mobile fMRI scanner the researchers trucked in to two state prisons, they scanned inmates’ brains as they did a simple impulse control task. Inmates were instructed to press a button as quickly as possible whenever they saw the letter X pop up on a screen inside the scanner, but not to press it if they saw the letter K. The task is rigged so that X pops up 84 percent of the time, which predisposes people to hit the button and makes it harder to suppress the impulse to press the button on the rare trials when a K pops up.
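The structure of that task is easy to see in a small simulation: because X appears on 84 percent of trials, pressing becomes the default response, and failures to withhold the press on the rare K trials serve as the behavioral index of impulse control. A toy sketch of the scoring logic; the error probability used below is invented, not a figure from the study:

```python
import random

def run_task(n_trials=500, p_x=0.84, p_false_alarm=0.25, seed=1):
    """Simulate a go/no-go task: press for X, withhold for K.

    p_false_alarm is a hypothetical chance of failing to suppress the
    button press on a K trial; the resulting error rate is the
    impulse-control measure.
    """
    rng = random.Random(seed)
    k_trials = commission_errors = 0
    for _ in range(n_trials):
        if rng.random() < p_x:
            continue              # X trial: pressing is the correct response
        k_trials += 1             # K trial: pressing would be an error
        if rng.random() < p_false_alarm:
            commission_errors += 1
    return commission_errors / k_trials if k_trials else 0.0

print(f"Commission-error rate on K trials: {run_task():.1%}")
```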

Based on previous studies, the researchers focused on the anterior cingulate cortex, one of several brain regions thought to be important for impulse control. Inmates with relatively low activity in the anterior cingulate made more errors on the task, suggesting a correlation with poor impulse control.

They were also more likely to get arrested after they were released. Inmates with relatively low anterior cingulate activity were roughly twice as likely as inmates with high anterior cingulate activity to be rearrested for a felony offense within 4 years of their release, even after controlling for other behavioral and psychological risk factors.

“This is an exciting new finding,” said Essi Viding, a professor of developmental psychopathology at University College London. “Interestingly this brain activity measure appears to be a more robust predictor, in particular of non-violent offending, than psychopathy or drug use scores, which we know to be associated with a risk of reoffending.” However, Viding notes that Kiehl’s team hasn’t yet tried to compare their fMRI test head to head against pencil-and-paper tests specifically designed to assess the risk of recidivism. “It would be interesting to see how the anterior cingulate cortex activity measure compares against these measures,” she said.

“It’s a great study because it brings neuroimaging into the realm of prediction,” said clinical psychologist Dustin Pardini of the University of Pittsburgh. The study’s design is an improvement over previous neuroimaging studies that compared groups of offenders with groups of non-offenders, he says. All the same, he’s skeptical that brain scans could be used to predict the behavior of a given individual. “In general we’re horrible at predicting human behavior, and I don’t see this as being any different, at least not in the near future.”

Even if the findings hold up in a larger study, there would be limitations, Pardini adds. “In a practical sense, there are just too many ways an offender could get around having an accurate representation of his brain activity taken,” he said. For example, if an offender moves his head while inside the scanner, that would render the scan unreadable. Even more subtle strategies, such as thinking about something unrelated to the task, or making mistakes on purpose, could also thwart the test.

Kiehl isn’t convinced either that this type of fMRI test will ever prove useful for assessing the risk to society posed by individual criminals. But his group is collecting more data — lots more — as part of a much larger study in the New Mexico state prisons. “We’ve scanned 3,000 inmates,” he said. “This is just the first 100.”

Kiehl hopes this work will point to new strategies for reducing criminal behavior. If low activity in the anterior cingulate does in fact turn out to be a reliable predictor of recidivism, perhaps therapies that boost activity in this region would improve impulse control and prevent future crimes, Kiehl says. He admits it’s speculative, but his group is already thinking up experiments to test the idea. “Cognitive exercises is where we’ll start,” he said. “But I wouldn’t rule out pharmaceuticals.”

Chemistry Nobel laureate speaks about the ‘magic of science’ in São Carlos (Fapesp)

In the opening lecture of the symposium honoring MIT professor Daniel Kleppner, Dudley Herschbach, winner of the 1986 Chemistry prize, presented parables to illustrate what chemistry can do (photo: Silvio Pires/FAPESP)

Feb. 28, 2013

By Karina Toledo

Agência FAPESP – With a lecture entitled “Glimpses of Chemical Wizardry”, the American Dudley Herschbach, winner of the 1986 Nobel Prize in Chemistry, opened the activities of a symposium that brings together this week some of the big names of world science in São Carlos, in the interior of São Paulo state.

To an auditorium packed with students, mainly from the physics, chemistry and biological sciences programs of the Federal University of São Carlos (UFSCar), Herschbach presented three “molecular parables” intended to show some of the spectacular things science can do.

In one of the stories, entitled “Life on tour inside cells”, Herschbach spoke about advanced super-resolution microscopy techniques developed by Xiaowei Zhuang, a researcher at Harvard University, which make it possible, for example, to study interactions between cells and gene expression in real time.

“Science does things that really seemed impossible before they happened. Every now and then, someone, somewhere in the world, does something magical and changes things. It is wonderful to know you are part of that. It is part of the reward of science that you don’t get in most professions,” Herschbach told Agência FAPESP.

A mathematics graduate of Stanford University, Herschbach earned master’s degrees in physics and in chemistry, as well as a doctorate in chemical physics from Harvard University, where he is now a professor.

“I was the first in my family to go to university. I was offered a scholarship to play [American] football, but I ended up trading it for an academic scholarship, because the coach had forbidden me to attend laboratory classes so I wouldn’t be late for practice. The truth is that I found science far more fascinating,” he recalled.

In the 1960s, the scientist carried out pioneering experiments with the crossed molecular beams technique to study chemical reactions and the dynamics of the atoms in molecules in real time. For his research in this field he received the 1986 Nobel Prize in Chemistry, together with the Taiwanese Yuan Lee and the Canadian John Polanyi.

The results were of great importance for the development of a new field of research, reaction dynamics, and provided a detailed understanding of how chemical reactions take place.

“When I look in the mirror while shaving, I realize that winning the Nobel hasn’t changed anything in me. The only difference is that people became more interested in what I have to say. They invite me to give talks and interviews. And that ended up turning me into a kind of ambassador for science,” Herschbach said.

Poetry in the classroom

Throughout the presentation, Herschbach pushed back against the myth that science is something very hard, reserved for the very intelligent. “I often hear people say you need to be very good at mathematics to be a good researcher, but most scientists use the same mathematics as a supermarket cashier. You don’t need to be good at everything, just at one thing; find a niche,” he said.

Comparing science with other human activities, Herschbach said that in no other profession can you fail countless times and still be applauded when you manage to get something right. “A musician can play almost every note correctly in a concert and still be criticized for missing just a few,” he noted.

Herschbach recounted that he used to ask his students to write poems, to show them that it is more important to worry about asking the right questions than about finding the right answer.

“That, more than solving equations, is what doing real science is like. Nobody says whether a poem is right or wrong, only how much it can open your eyes to something that seemed ordinary and make you see it differently. It’s the same with science. If you do frontier research, new things, it is very artistic. I want students to realize that they too can be wizards,” he concluded.

The Symposium in Honor of Prof. Daniel Kleppner, “Atomic Physics and Related Areas”, which runs until March 1, is organized by the Optics and Photonics Research Center (Cepof) in São Carlos, one of the Research, Innovation and Dissemination Centers (CEPID) funded by FAPESP.

The aim of the meeting is to pay tribute to the American physicist Daniel Kleppner, of the Massachusetts Institute of Technology (MIT), who will receive the title of honorary professor from the São Carlos Institute of Physics of the University of São Paulo (IFSC-USP).

Besides Herschbach, a friend of Kleppner’s since their undergraduate days, four other Nobel laureates are also taking part in the event: Serge Haroche (Physics, 2012), David Wineland (Physics, 2012), Eric Cornell (Physics, 2001) and William Phillips (Physics, 1997).

Will we ever have cyborg brains? (IO9)

Will we ever have cyborg brains?

DEC 19, 2012 2:40 PM

By George Dvorsky

Over at BBC Future, computer scientist Martin Angler has put together a provocative piece about humanity’s collision course with cybernetic technologies. Today, says Angler, we’re using neural interface devices and other assistive technologies to help the disabled. But in short order we’ll be able to radically enhance human capacities — prompting him to wonder about the extent to which we might cyborgize our brains.

Angler points to two recent and equally remarkable breakthroughs: a paralyzed stroke victim who was able to guide a robot arm that delivered a hot drink, and a thought-controlled prosthetic hand that could grasp a variety of objects.

Admitting that it’s still early days, Angler speculates about the future:

Yet it’s still a far cry from the visions of man fused with machine, or cyborgs, that grace computer games or sci-fi. The dream is to create the type of brain augmentations we see in fiction that provide cyborgs with advantages or superhuman powers. But the ones being made in the lab only aim to restore lost functionality – whether it’s brain implants that restore limb control, or cochlear implants for hearing.

Creating implants that improve cognitive capabilities, such as an enhanced vision “gadget” that can be taken from a shelf and plugged into our brain, or implants that can restore or enhance brain function is understandably a much tougher task. But some research groups are beginning to make some inroads.

For instance, neuroscientists Matti Mintz from Tel Aviv University and Paul Verschure from Universitat Pompeu Fabra in Barcelona, Spain, are trying to develop an implantable chip that can restore lost movement through the ability to learn new motor functions, rather than regaining limb control. Verschure’s team has developed a mathematical model that mimics the flow of signals in the cerebellum, the region of the brain that plays an important role in movement control. The researchers programmed this model onto a circuit and connected it with electrodes to a rat’s brain. If they tried to teach the rat a conditioned motor reflex – to blink its eye when it sensed an air puff – while its cerebellum was “switched off” by being anaesthetised, it couldn’t respond. But when the team switched the chip on, this recorded the signal from the air puff, processed it, and sent electrical impulses to the rat’s motor neurons. The rat blinked, and the effect lasted even after it woke up.
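The logic of that experiment can be caricatured in a few lines of code: pair the stimulus (the air puff) with the response (the blink) over repeated trials, strengthen the association each time, and drive the response once the learned weight crosses a threshold. This is only a toy associative-learning stand-in for the published cerebellar model, with invented parameters:

```python
# Toy stand-in for the cerebellum-on-a-chip experiment: an air puff is
# repeatedly paired with a blink until the learned association is strong
# enough to trigger the blink on its own. Parameters are invented.

LEARNING_RATE = 0.2
THRESHOLD = 0.5

weight = 0.0  # strength of the puff-to-blink association
for trial in range(1, 11):
    weight += LEARNING_RATE * (1.0 - weight)   # conditioning strengthens the link
    blinks = weight >= THRESHOLD               # the chip drives motor output once learned
    print(f"trial {trial:2d}: weight={weight:.2f} blink={'yes' if blinks else 'no'}")
```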

Be sure to read the entire article, as Angler discusses uplifted monkeys, the tricky line that divides a human brain from a cybernetic one, and the all-important question of access.

Image: BBC/Science Photo Library.