Tag archive: Technological mediation

From the Concorde to Sci-Fi Climate Solutions (Truthout)

Thursday, 29 January 2015 00:00 By Almuth Ernsting, Truthout

The interior of the Concorde aircraft at the Scotland Museum of Flight. (Photo: Magnus Hagdorn)

Touting “sci-fi climate solutions” – untested technologies not really scalable to the dimensions of our climate change crisis – dangerously delays the day when we actually reduce greenhouse gas emissions.

Last week, I took my son to Scotland’s Museum of Flight. Its proudest exhibit: a Concorde. To me, it looked stunningly futuristic. “How old,” remarked my son, looking at the confusing array of pre-digital controls in the cockpit. Watching the accompanying video – “Past Dreams of the Future” – it occurred to me that the story of the Concorde stands as a symbol for two of the biggest obstacles to addressing climate change.

The Concorde must rank among the most wasteful ways of guzzling fossil fuels ever invented. No other form of transport is as destructive to the climate as aviation – yet the Concorde burned almost five times as much fuel per person per mile as a standard aircraft. Moreover, by emitting pollutants straight into the lower stratosphere, the Concorde contributed to ozone depletion. At the time of the Concorde’s first test flight in 1969, little was known about climate change and the ozone hole had not yet been discovered. Yet by the time the Concorde was grounded – for purely economic reasons – in 2003, concerns about its impact on the ozone layer had been voiced for 32 years, and the Intergovernmental Panel on Climate Change’s (IPCC) first report had been published 13 years earlier.

The Concorde’s history illustrates how the elites will stop at nothing when pursuing their interests or desires. No damage to the atmosphere and no level of noise-induced misery to those living under Concorde flight paths were treated as bad enough to warrant depriving the richest of a glamorous toy.

If this first “climate change lesson” from the Concorde seems depressing, the second will be even less comfortable for many.

Back in 1969, the UK’s technology minister marveled at Concorde’s promises: “It’ll change the shape of the world; it’ll shrink the globe by half . . . It replaces in one step the entire progress made in aviation since the Wright Brothers in 1903.”

Few would have believed at that time that, from 2003, no commercial flight would reach even half the speed that had been achieved back in the 1970s.

The Concorde remained as fast, yet as inefficient and uneconomical, as it had been at its commercial inauguration in 1976, despite vast amounts of public and industry investment. The term “Concorde fallacy” entered British dictionaries: “The idea that you should continue to spend money on a project, product, etc. in order not to waste the money or effort you have already put into it, which may lead to bad decisions.”

The lessons for those who believe in overcoming climate change through technological progress are sobering: It’s not written in the stars that every technology dreamed up can be realized, nor that, with enough time and money, every technical problem will be overcome and that, over time, every new technology will become better, more efficient and more affordable.

Yet precisely such faith in technological progress informs mainstream responses to climate change, including the response by the IPCC. At a conference last autumn, I listened to a lead author of the IPCC’s latest assessment report. His presentation began with a depressing summary of the escalating climate crisis and the massive rise in energy use and carbon emissions, clearly correlated with economic growth. His conclusion was highly optimistic: Provided we make the right choices, technological progress offers a future with zero-carbon energy for all, with ever greater prosperity and no need for economic growth to end. This, he illustrated with some drawings of what we might expect by 2050: super-grids connecting abundant nuclear and renewable energy sources across continents, new forms of mass transport (perhaps modeled on Japan’s magnetic levitation trains), new forms of aircraft (curiously reminiscent of the Concorde) and completely sustainable cars (which looked like robots on wheels). The last and most obscure drawing in his presentation was unfinished, to remind us that future technological progress is beyond our capacity to imagine; the speaker suggested it might be a printer printing itself in a new era of self-replicating machines.

These may represent the fantasies of just one of many lead authors of the IPCC’s recent report. But the IPCC’s 2014 mitigation report itself relies on a large range of techno-fixes, many of which are a long way from being technically, let alone commercially, viable. Climate justice campaigners have condemned the IPCC’s support for “false solutions” to climate change. But the term “false solutions” does not distinguish between techno-fixes that are real and scalable, albeit harmful and counterproductive on the one hand, and those that remain in the realm of science fiction, or threaten to turn into another “Concorde fallacy,” i.e. to keep guzzling public funds with no credible prospect of ever becoming truly viable. Let’s call the latter “sci-fi solutions.”

The most prominent, though by no means only, sci-fi solution espoused by the IPCC is BECCS – bioenergy with carbon capture and storage. According to their recent report, the vast majority of “pathways” or models for keeping temperature rise below 2 degrees Celsius rely on “negative emissions.” Although the report included words of caution, pointing out that such technologies are “uncertain” and “associated with challenges and risks,” the conclusion is quite clear: Either carbon capture and storage, including BECCS, is introduced on a very large scale, or the chances of keeping global warming within 2 degrees Celsius are minimal. In the meantime, the IPCC’s chair, Rajendra Pachauri, and the co-chair of the panel’s Working Group on Climate Change Mitigation, Ottmar Edenhofer, publicly advocate BECCS without any notes of caution about uncertainties – referring to it as a proven way of reducing carbon dioxide levels and thus global warming. Not surprisingly therefore, BECCS has even entered the UN climate change negotiations. The recent text, agreed at the Lima climate conference in December 2014 (“Lima Call for Action”), introduces the terms “net zero emissions” and “negative emissions,” i.e. the idea that we can reliably suck large amounts of carbon (those already emitted from burning fossil fuels) out of the atmosphere. Although BECCS is not explicitly mentioned in the Lima Call for Action, the wording implies support for it because it is treated as the key “negative emissions” technology by the IPCC.

If BECCS were to be applied at a large scale in the future, then we would have every reason to be alarmed. According to a scientific review, attempting to capture 1 billion tons of carbon through BECCS (far less than many of the “pathways” considered by the IPCC presume) would require 218 to 990 million hectares of switchgrass plantations (or similar scale plantations of other feedstocks, including trees), 1.6 to 7.4 trillion cubic meters of water a year, and 75 percent more than all the nitrogen fertilizers used worldwide (which currently stands at 1 billion tons according to the “conservative” estimates in many studies). By comparison, just 30 million hectares of land worldwide have been converted to grow feedstock for liquid biofuels so far. Yet biofuels have already become the main cause of accelerated growth in demand for vegetable oils and cereals, triggering huge volatility and rises in the price of food worldwide. And by pushing up palm oil prices, biofuels have driven faster deforestation across Southeast Asia and increasingly in Africa. As a result of the ethanol boom, more than 6 million hectares of US land have been planted with corn, causing prairies and wetlands to be plowed up. This destruction of ecosystems, coupled with the greenhouse gas intensive use of fertilizers, means that biofuels overall are almost certainly worse for the climate than the fossil fuels they are meant to replace. There are no reasons to believe that the impacts of BECCS would be any more benign. And they would be on a much larger scale.

Capturing carbon takes a lot of energy, hence CCS requires around one-third more fuel to be burned to generate the same amount of energy. And sequestering captured carbon is a highly uncertain business. So far, there have been three large-scale carbon sequestration experiments. The longest-standing of these, the Sleipner field carbon sequestration trial in the North Sea, has been cited as proof that carbon dioxide can be sequestered reliably under the seabed. Yet in 2013, unexpected scars and fractures were found in the reservoir and a lead researcher concluded: “We are saying it is very likely something will come out in the end.” Another one of the supposedly “successful,” if much shorter, trials also raised “interesting questions,” according to the researchers: Carbon dioxide migrated further upward in the reservoir than predicted, most likely because injecting the carbon dioxide caused fractures in the cap rock.

There are thus good reasons to be alarmed about the prospect of large-scale bioenergy with CCS. Yet BECCS isn’t for real.

While the IPCC and world leaders conclude that we really need to use carbon capture and storage, including biomass, here’s what is actually happening: The Norwegian government, once proud of being a global pioneer of CCS, has pulled the plug on the country’s first full-scale CCS project after a scathing report from a public auditor. The Swedish state-owned energy company Vattenfall has shut down its CCS demonstration plant in Germany, the only plant worldwide testing a particular and supposedly promising carbon capture technology. The government of Alberta has dropped its previously enthusiastic support for CCS because it no longer sees it as economically viable.

True, 2014 has seen the opening of the world’s largest CCS power station, after SaskPower retrofitted one unit of their Boundary Dam coal power station in Saskatchewan to capture carbon dioxide. But Boundary Dam hardly confirms the techno-optimist’s hopes. The 100-megawatt unit cost approximately $1.4 billion to build – more than twice the cost of a much larger (non-CCS) 400-megawatt gas power station built by SaskPower in 2009. It became viable thanks only to public subsidies and to a contract with the oil company Cenovus, which agreed to buy the carbon dioxide for the next decade in order to inject it into an oil well to facilitate extraction of more hard-to-reach oil – a process called enhanced oil recovery (EOR). The supposed “carbon dioxide savings” predictably ignore all of the carbon dioxide emissions from burning that oil. But even with such a nearby oil field suitable for EOR, SaskPower had to make the plant far smaller than originally planned so as to avoid capturing more carbon dioxide than they could sell.

If CCS with fossil fuels is reminiscent of the Concorde fallacy, large-scale BECCS is entirely in the realm of science fiction. The supposedly most “promising” technology has never been tested in a biomass power plant and has so far proven uneconomical with coal. Add to that the fact that biomass power plants need more feedstock and are less efficient and more expensive to run than coal power plants, and a massive-scale BECCS program becomes even more implausible. And then add to that the question of scale: Sequestering 1 billion tons of carbon a year would produce a volume of highly pressurized liquid carbon dioxide larger than the global volume of oil extracted annually. It would require governments and/or companies stumping up the money to build an infrastructure larger than that of the entire global oil industry – without any proven benefit.

This doesn’t mean that we won’t see any little BECCS projects in niche circumstances. One of these already exists: ADM is capturing carbon dioxide from ethanol fermentation in one of its refineries for use in CCS research. Capturing carbon dioxide from ethanol fermentation is relatively simple and cheap. If there happens to be some half-depleted nearby oil field suitable for enhanced oil recovery, some ethanol “CCS” projects could pop up here and there. But this has little to do with a “billion ton negative emissions” vision.

BECCS thus appears as one, albeit a particularly prominent, example of baseless techno-optimism leading to dangerous policy choices. Dangerous, that is, because hype about sci-fi solutions becomes a cover for the failure to curb fossil fuel burning and ecosystem destruction today.

Monitoring and data analysis – The crisis in São Paulo’s water sources (Probabit)

Status: January 25, 2015

4.2 millimeters of rain on January 24, 2015 over São Paulo’s reservoirs (weighted average).

305 billion liters (13.60%) of water in storage. In 24 hours, the volume rose 4.4 billion liters (0.19%).

134 days until all stored water runs out, assuming rainfall of 996 mm/year and the system’s current efficiency.

66% is the cut in consumption needed to balance the system under current conditions, assuming 33% losses in distribution.

Understanding the crisis

How to read this chart

The points on the chart show 4,040 one-year windows of accumulated rainfall versus the change in total water stock (from January 1, 2003/2004 through today). The pattern shows that more rain pushes the stock up and less rain pushes it down, as one would expect.

This chart and the others on this page always refer to São Paulo’s total water storage capacity (2.24 trillion liters), that is, the sum of the reservoirs of the Cantareira, Alto Tietê, Guarapiranga, Cotia, Rio Grande and Rio Claro systems. Want to explore the data?

The band of accumulated rainfall between 1,400 mm and 1,600 mm per year holds most of the points observed since 2003. This is the usual rainfall pattern the system was designed for. In that band, the system operates without straying far from equilibrium: at most 15% up or down in a year. By using the one-year variation as its reference, this view of the data removes the seasonal rainfall cycle and highlights longer climate swings. See year-by-year patterns.

A second layer of information on the same chart is the risk zones. The red zone is bounded by the current water stock in %. All points inside it (with their frequency indicated on the right) therefore represent situations that, if repeated, will lead the system to collapse in less than a year. The yellow zone shows the incidence of cases that, if repeated, will shrink the stock. The system will only truly recover if new points appear above the yellow band.

To put the current moment in context and suggest a trend, points connected in blue highlight today’s reading (accumulated rainfall and the change between today and the same day last year) and the readings from 30, 60 and 90 days ago (in progressively lighter shades).

Discussion based on a simple model

Fitting a linear model to the observed cases shows a reasonable correlation between accumulated rainfall and the change in the water stock, as expected.

At the same time, the wide scatter in the system’s behavior is clear, especially in the 1,400 mm to 1,500 mm rainfall band. Above 1,600 mm there are two well-separated paths; the lower one corresponds to the period between 2009 and 2010, when the reservoirs were full and the surplus rain could not be stored.

Besides deliberately more or less efficient management of the available water, combined variations in consumption, in losses and in the effectiveness of water capture may contribute to the observed fluctuations. However, there are no data that would let us examine the effect of each of these variables separately.
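The linear model described above can be sketched in a few lines. This is a minimal ordinary-least-squares fit, with made-up (rainfall, stock-change) pairs standing in for the real corrected Sabesp series; the equilibrium rainfall is simply where the fitted line crosses zero.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x, no external libraries."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# Hypothetical observations: annual rainfall (mm) vs. one-year change
# in stored water (% of total capacity). Illustrative numbers only.
rain  = [1000, 1200, 1400, 1500, 1600, 1800]
delta = [-22,  -13,   -4,    1,    5,   14]

a, b = fit_line(rain, delta)
equilibrium = -a / b   # rainfall at which the stock neither grows nor shrinks
print(f"slope: {b:.4f} %-points per mm, equilibrium: {equilibrium:.0f} mm/year")
```

With these toy numbers the fitted equilibrium lands near 1,500 mm/year, consistent with the “design” equilibrium discussed below; on the real series the scatter around the line is what the risk zones visualize.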

Simulation 1: Effect of enlarging the water stock

In this simulation, the additional reserve of the Billings reservoir, with a volume of 998 billion liters (already discounting the “potable” arm of the Rio Grande reservoir), was hypothetically added to the supply system.

Enlarging the available stock does not move the equilibrium point, but it changes the slope of the line relating rainfall to the change in stock. The difference in slope between the blue (simulated) and red (actual) lines shows the effect of the larger stock.

If the Billings reservoir were not a giant sewage dump today, we might be out of the critical zone. It is worth stressing, however, that enlarging storage alone cannot stave off scarcity indefinitely if rainfall stays below the equilibrium point.

Simulation 2: Effect of improved efficiency

The only way to keep the stock stable when rain becomes scarcer is to change the system’s “efficiency curve.” In other words, we must consume less and adapt to less water entering the system.

The blue line in the chart alongside marks the axis around which the points would need to fluctuate for the system to balance on an annual supply of 1,200 mm of rain.

Efficiency can be improved by cutting consumption, cutting losses and improving water capture (for example, by restoring the riparian forests and springs around the water sources).

If the pattern seen from 2013 to 2015 persists, with rainfall around 1,000 mm, the system will need to reach an efficiency curve far beyond anything it has ever achieved, above even the best cases on record.

With the “design” equilibrium around 1,500 mm, the arithmetic goes roughly like this: Sabesp loses 500 mm (33% of the water distributed) and the population consumes 1,000 mm. To reach equilibrium quickly at 1,000 mm, consumption would have to fall to 500 mm, since the losses cannot be eliminated quickly and occur before consumption.

If one third of the distributed water were not systematically lost, there would be no crisis. The 500 mm of rain wasted every year by the decrepit distribution network is not missed when 1,500 mm falls, but at 1,000 mm every liter thrown away on one side is a liter that must be saved on the other.
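The back-of-the-envelope balance above can be written out directly. All figures are in millimeters of rain-equivalent per year and are the ones quoted in the text; the assumption, as stated there, is that losses stay fixed in the short term and come out of the flow before consumption.

```python
design_rainfall = 1500   # mm/year the system was designed around
losses          = 500    # mm/year lost in distribution (~33% of distributed water)
consumption     = design_rainfall - losses   # 1000 mm/year reaches consumers

new_rainfall = 1000      # mm/year under the 2013-2015 drought pattern
# Losses cannot be cut quickly, so consumption must absorb the whole shortfall:
new_consumption = new_rainfall - losses      # 500 mm/year
cut = 1 - new_consumption / consumption      # fraction consumers must save
print(f"required consumption cut: {cut:.0%}")  # → required consumption cut: 50%
```

This is why every liter lost in distribution matters twice at 1,000 mm of rain: it is both water gone and a saving pushed onto consumers.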

Simulation 3: Current efficiency and required savings

To estimate current efficiency, the last 120 observations of the system’s behavior are used.

The current efficiency curve lets us estimate the system’s present equilibrium point (highlighted red dot).

The blue dot marks the latest observation of accumulated annual rainfall. The gap between the two measures the size of the imbalance.

Just to stop the system from losing water, the withdrawal flow must be cut by 49%. Since that flow includes all losses, if everything depends on reduced consumption alone, the saving must be 66% if losses are 33%, or 56% if losses are 17%.

It seems incredible that the system’s efficiency should be so low in the middle of such a grave crisis. Is the attempt to restrain consumption actually increasing consumption? Do smaller, shallower volumes evaporate more? Have people still not grasped the scale of the disaster?


Assuming no new water stocks will be added in the short term, the prognosis of whether and when the water will run out depends on the amount of rain and on the system’s efficiency.

The chart shows how many days of water remain as a function of accumulated rainfall, under two efficiency curves: the average one and the current one (estimated from the last 120 days).

The highlighted point takes the most recent observation of accumulated annual rainfall and shows how many days of water remain if current rainfall and efficiency conditions persist.

The prognosis is a reference that shifts with each new observation and carries no defined probability. It is a projection meant to make visible the conditions needed to escape collapse.
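At its core the projection is a ratio: stored water divided by the net daily draw (withdrawals minus the inflow implied by current rainfall and efficiency). A minimal sketch, using the figures from the page header, with the net draw back-derived from the 134-day estimate rather than measured:

```python
stock_liters = 305e9            # water in storage (13.6% of the 2.24e12 L capacity)
net_draw_per_day = 305e9 / 134  # liters/day implied by the 134-day prognosis

days_left = stock_liters / net_draw_per_day
print(round(days_left))  # → 134
```

In the real model both the stock and the net draw are re-estimated daily, which is why the prognosis shifts with each new observation.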

However, given that São Paulo’s historical average rainfall is 1,441 mm per year, a curve crossing that threshold means a system with more than a 50% chance of collapsing in under a year. Are we capable of avoiding the disaster?

The data

The starting point is the data released daily by Sabesp. The updated original data series is available here.

There are, however, two important limitations in these data that can distort interpretation: 1) Sabesp uses only percentages to refer to reservoirs with very different total volumes; 2) the arrival of new volumes does not change the base over which those percentages are calculated.

It was therefore necessary to correct the percentages of the original series against the current total volume, since volumes that were once inaccessible became accessible and, let’s be honest, were in the reservoirs all along. The corrected series can be obtained here. It contains an additional column with the actual volumes (in billions of liters: hm³).

In addition, we decided to treat the data in consolidated form, as if all the water sat in a single large reservoir. The data series used to generate the charts on this page contains only the weighted sum of the daily stock (%) and rainfall (mm), and is also available.

These corrections remove the spikes caused by the arrival of the dead volumes and make the pattern of falling stock in 2014 much easier to see.

Year-by-year patterns

Mean and quartiles of the stock over the year

About this study

Worried about water scarcity, I began studying the problem at the end of 2014. I looked for a concise and consistent way to present the data, highlighting the three variables that really matter: rainfall, total stock and the system’s efficiency. The site went live on January 16, 2015. Every day, the models and charts are rebuilt with the new information.

I hope this page helps convey the real dimension of São Paulo’s water crisis and encourages more action to confront it.

Mauro Zackiewicz

scientia probabit | essential data laboratory

What to expect from science in 2015 (Zero Hora)

We bet on five things likely to appear this year

19/01/2015 | 06h01

Photo: SpaceX/Youtube

In 2014, science managed to land on a comet, discovered it had been wrong about the genetic evolution of birds, and unveiled the largest fossils in history. Miguel Nicolelis presented his exoskeleton at the World Cup, the Brazilian satellite CBERS-4, built in partnership with China, reached orbit successfully, and a Brazilian brought home mathematics’ top medal.

But what will we see in 2015? We bet on five things that may appear this year.

Reusable rockets

If we want to colonize Mars, a one-way ticket is no use. These rockets, able to go and come back, are the promise that could transform the future of space travel. We will see whether SpaceX, already working on it, succeeds.

Robots at home

In February, Japan’s Softbank starts selling a humanoid robot called Pepper. It uses artificial intelligence to recognize its owner’s mood and speaks four languages. Though for now it is more of a helper than a doer, it will soon learn new functions.

Invisible universe

The Large Hadron Collider returns to operation in March with twice the particle-smashing power. One possibility is that it will help uncover new superparticles that may make up dark matter. That would be the first new state of matter discovered in a century.

A cure for Ebola

After the 2014 crisis, Ebola vaccines may begin to work and save many lives in Africa. The same goes for AIDS: HIV is surrounded, and we hope science finally defeats it this year.

Climate talks

2014 was one of the hottest years on record and, the way things are going, 2015 will follow the same path. In December, in Paris, the world will discuss an agreement to try to rein in greenhouse gas emissions, with measures to take effect from 2020. May our leaders be sensible.

How Mathematicians Used A Pump-Action Shotgun to Estimate Pi (The Physics arXiv Blog)

The Physics arXiv Blog

If you’ve ever wondered how to estimate pi using a Mossberg 500 pump-action shotgun, a sheet of aluminium foil and some clever mathematics, look no further

Imagine the following scenario. The end of civilisation has occurred, zombies have taken over the Earth and all access to modern technology has ended. The few survivors suddenly need to know the value of π and, being a mathematician, they turn to you. What do you do?

If ever you find yourself in this situation, you’ll be glad of the work of Vincent Dumoulin and Félix Thouin at the Université de Montréal in Canada. These guys have worked out how to calculate an approximate value of π using the distribution of pellets from a Mossberg 500 pump-action shotgun, which they assume would be widely available in the event of a zombie apocalypse.

The principle is straightforward. Imagine a square with sides of length 1 and which contains an arc drawn between two opposite corners to form a quarter circle. The area of the square is 1 while the area of the quarter circle is π/4.

Next, sprinkle sand or rice over the square so that it is covered with a random distribution of grains. Then count the number of grains inside the quarter circle and the total number that cover the entire square.

The ratio of these two numbers is an estimate of the ratio between the area of the quarter circle and the square, in other words π/4.

So multiplying this ratio by 4 gives you π, or at least an estimate of it. And that’s it.

This technique is known as a Monte Carlo approximation (after the casino where the uncle of the physicist who developed it used to gamble). And it is hugely useful in all kinds of simulations.

Of course, the accuracy of the technique depends on the distribution of the grains on the square. If they are truly random, then a mere 30,000 grains can give you an estimate of π which is within 0.07 per cent of the actual value.
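The sand-and-rice version described above takes only a few lines to simulate. Each “grain” is a random point in the unit square, and the estimate is four times the fraction that lands inside the quarter circle:

```python
import random

# Monte Carlo estimate of pi: scatter N random "grains" over the unit
# square and count how many fall inside the quarter circle of radius 1.
random.seed(1)
N = 30_000
inside = sum(1 for _ in range(N)
             if random.random() ** 2 + random.random() ** 2 <= 1.0)
pi_hat = 4 * inside / N
print(pi_hat)
```

With 30,000 truly random points the typical error is a few tenths of a percent; a particular run can do better or worse, which is why the quoted 0.07 percent should be read as one realization rather than a guarantee.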

Dumoulin and Thouin’s idea is to use the distribution of shotgun pellets rather than sand or rice (which would presumably be in short supply in the post-apocalyptic world). So these guys set up an experiment consisting of a 28-inch barrel Mossberg 500 pump-action shotgun aimed at a sheet of aluminium foil some 20 metres away.

They loaded the gun with cartridges composed of 3 dram equivalent of powder and 32 grams of #8 lead pellets. When fired from the gun, these pellets have an average muzzle velocity of around 366 metres per second.

Dumoulin and Thouin then fired 200 shots at the aluminium foil, peppering it with 30,857 holes. Finally, they used the position of these holes in the same way as the grains of sand or rice in the earlier example, to calculate the value of π.

They immediately have a problem, however. The distribution of pellets is influenced by all kinds of factors, such as the height of the gun, the distance to the target, wind direction and so on. So this distribution is not random.

To get around this, they are able to fall back on a technique known as importance sampling. This is a trick that allows mathematicians to estimate the properties of one type of distribution while using samples generated by a different distribution.

Of their 30,000 pellet holes, they chose 10,000 at random to perform this estimation trick. They then use the remaining 20,000 pellet holes to get an estimate of π, safe in the knowledge that importance sampling allows the calculation to proceed as if the distribution of pellets had been random.
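The reweighting idea can be sketched concretely. In the sketch below the samples come from a known non-uniform mixture distribution q over the unit square (a stand-in for the pellet pattern; the paper instead estimates q from half the holes), and each point inside the quarter circle is weighted by p/q, where p = 1 is the uniform density we wish we had sampled from:

```python
import random

random.seed(2)
N = 200_000

def sample_and_density():
    # Each coordinate: with prob 0.5 uniform on [0,1], else Beta(2,2).
    def draw():
        return random.random() if random.random() < 0.5 else random.betavariate(2, 2)
    x, y = draw(), draw()
    # Mixture density factorizes per coordinate: 0.5*1 + 0.5*6t(1-t).
    q = (0.5 + 0.5 * 6 * x * (1 - x)) * (0.5 + 0.5 * 6 * y * (1 - y))
    return x, y, q

total = 0.0
for _ in range(N):
    x, y, q = sample_and_density()
    if x * x + y * y <= 1.0:
        total += 1.0 / q          # importance weight p/q with p = 1
pi_hat = 4 * total / N
print(pi_hat)
```

Because the expectation of (indicator / q) under q equals the area of the quarter circle, the weighted average converges to π/4 even though no sample was drawn uniformly, which is exactly the trick that rescues the non-random pellet distribution.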

The result? Their value of π is 3.131, which is just 0.33 per cent off the true value. “We feel confident that ballistic Monte Carlo methods constitute reliable ways of computing mathematical constants should a tremendous civilization collapse occur,” they conclude.

Quite! Other methods are also available.

Ref: A Ballistic Monte Carlo Approximation of π

Butterflies, Ants and the Internet of Things (Wired)

[Isn’t it scary that there are bright people who are that innocent? Or perhaps this is just a propaganda piece. – RT]


12.10.14  |  12:41 PM

Buckminster Fuller once wrote, “there is nothing in the caterpillar that tells you it’s going to be a butterfly.” It’s true that often our capacity to look at things and truly understand their final form is very limited. Nor can we necessarily predict what happens when many small changes combine – when small pebbles roll down a hillside and turn into a landslide that dams a river and floods a plain.

This is the situation we face now as we try to understand the final form and impact of the Internet of Things (IoT). Countless small, technological pebbles have begun to roll down the hillside from initial implementation to full realization.  In this case, the “pebbles” are the billions of sensors, actuators, and smart technologies that are rapidly forming the Internet of Things. And like the caterpillar in Fuller’s quote, the final shape of the IoT may look very different from our first guesses.

Whatever the world looks like as the IoT bears full fruit, the experience of our lives will be markedly different. The world around us will not only be aware of our presence, it will know who we are, and it will react to us, often before we are even aware of it. The day-to-day process of living will change because almost every piece of technology we touch (and many we do not) will begin to tailor its behavior to our specific needs and desires. Our car will talk to our house.

Walking into a store will be very different, as the displays around us could modify their behavior based on our preferences and buying habits.  The office of the future will be far more adaptive, less rigid, more connected – the building will know who we are and will be ready for us when we arrive.  Everything, from the way products are built and packaged and the way our buildings and cities are managed, to the simple process of travelling around, interacting with each other, will change and change dramatically. And it’s happening now.

We’re already seeing mainstream manufacturers building IoT awareness into their products, such as Whirlpool building Internet-aware washing machines, and specialized IoT consumer tech such as LIFX light bulbs which can be managed from a smartphone and will respond to events in your house. Even toys are becoming more and more connected as our children go online at even younger ages.  And while many of the consumer purchases may already be somehow “IoT” aware, we are still barely scratching the surface of the full potential of a fully connected world. The ultimate impact of the IoT will run far deeper, into the very fabric of our lives and the way we interact with the world around us.

One example is the German port of Hamburg. The Hamburg Port Authority is building what it calls a smartPort, literally embedding millions of sensors in everything from container-handling systems to street lights, to provide the data and management capabilities needed to move cargo through the port more efficiently, avoid traffic snarl-ups, and even predict environmental impacts through sensors that respond to noise and air pollution.

Securing all those devices and sensors will require a new way of thinking about technology and the interactions of “things,” people, and data. What we must do, then, is to adopt an approach that scales to manage the staggering numbers of these sensors and devices, while still enabling us to identify when they are under attack or being misused.

This is essentially the same problem we already face when dealing with human beings: how do I know when someone is doing something they shouldn’t? Specifically, how can I identify a bad person in a crowd of law-abiding citizens?

The best answer is what I like to call the “Vegas Solution.” Rather than adopting a model that screens every person as they enter a casino, the security folks out in Nevada watch for behavior that indicates someone is up to no good, and then respond accordingly. It’s low impact for everyone else, but works with ruthless efficiency (as anyone who has ever tried counting cards in a casino will tell you).

This approach focuses on known behaviors and looks for anomalies. It is, at its most basic, the practical application of “identity.” If I understand the identity of the people I am watching, and as a result, their behavior, I can tell when someone is acting badly.

Now scale this up to the vast number of devices and sensors out there in the nascent IoT. If I understand the “identity” of all those washing machines, smart cars, traffic light sensors, industrial robots, and so on, I can determine what they should be doing, see when that behavior changes (even in subtle ways such as how they communicate with each other) and respond quickly when I detect something potentially bad.
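
A minimal sketch of this behavior-baseline idea in Python. The device IDs, the messages-per-interval metric, and the thresholds are all invented for illustration; a real IoT monitoring system would track many more behavioral signals per identity.

```python
from collections import defaultdict
from statistics import mean, stdev

class DeviceMonitor:
    """Tracks a per-device behavioral baseline (here: messages per interval)
    and flags intervals that deviate sharply from that identity's history."""

    def __init__(self, window=30, threshold=3.0):
        self.history = defaultdict(list)  # device_id -> recent message counts
        self.window = window
        self.threshold = threshold        # std-devs from baseline counted as anomalous

    def observe(self, device_id, msg_count):
        """Record one interval's message count; return True if anomalous."""
        past = self.history[device_id]
        anomalous = False
        if len(past) >= 5:  # need some history before judging behavior
            mu, sigma = mean(past), stdev(past)
            if sigma > 0 and abs(msg_count - mu) / sigma > self.threshold:
                anomalous = True
        past.append(msg_count)
        if len(past) > self.window:
            past.pop(0)
        return anomalous

monitor = DeviceMonitor()
# A washing machine that normally sends ~10 messages per interval...
for count in [10, 11, 9, 10, 12, 10, 11]:
    monitor.observe("washer-42", count)
# ...suddenly sends 500: flagged as a change in behavior.
print(monitor.observe("washer-42", 500))  # True
```

The same skeleton extends to subtler signals, such as whom a device talks to or when, which is exactly the kind of identity-relative anomaly the text describes.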

The approach is sound; in fact, it’s probably the only approach that will scale to meet the complexity of the billions upon billions of “things” that make up the IoT. The challenge is that a concept of identity must be applied to far more “things” than we have ever managed before. If there is an “Internet of Everything,” there will have to be an “Identity of Everything” to go with it. Those identities will tell us what each device is, when it was created, how it should behave, what it is capable of, and so on. There are already proposed standards for this kind of thing, such as the UK’s HyperCat standard, which lets one device figure out what another device it can talk to actually does, and therefore what kind of information it might want to share.
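
To make the discovery idea concrete, here is an illustrative Python sketch that queries a simplified catalogue loosely modeled on HyperCat’s JSON structure (items with an `href` and rel/val metadata pairs). The field names and URNs below are invented for the example, not drawn from the actual specification.

```python
import json

# A simplified catalogue loosely modeled on HyperCat's JSON structure:
# a list of items, each with an href and rel/val metadata pairs.
# (These field names are illustrative, not a normative HyperCat document.)
catalogue_json = """
{
  "items": [
    {"href": "/sensors/temp-1",
     "item-metadata": [{"rel": "urn:X-example:rels:type", "val": "temperature"}]},
    {"href": "/sensors/noise-1",
     "item-metadata": [{"rel": "urn:X-example:rels:type", "val": "noise"}]}
  ]
}
"""

def find_items(catalogue, rel, val):
    """Return hrefs of items whose metadata declares the given rel/val pair,
    i.e. answer 'which devices can give me this kind of information?'"""
    return [
        item["href"]
        for item in catalogue["items"]
        if any(m["rel"] == rel and m["val"] == val
               for m in item.get("item-metadata", []))
    ]

cat = json.loads(catalogue_json)
print(find_items(cat, "urn:X-example:rels:type", "noise"))  # ['/sensors/noise-1']
```

The point of a catalogue format like this is that one machine can answer that question about another without any human in the loop.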

Where things get really interesting, however, is when we start to watch the interactions of all these identities – and especially the interactions of the “thing” identities with our own. How we humans interact with all the devices around us will provide even more insight into our lives, wants, and behaviors. Watching how I interact with my car, and the car with the road, and so on, will help manage city traffic far more efficiently than broad-brush traffic studies. Likewise, as the wearable technology I have on my person (or in my person) interacts with the sensors around me, my experience of almost everything, from shopping to public services, can be tailored and managed more efficiently. This, ultimately, is the promise of the IoT: a world that is responsive, intelligent, and tailored for every situation.

As we continue to add more and more sensors and smart devices, the potential power of the IoT grows.  Many small, slightly smart things have a habit of combining to perform amazing feats. Taking another example from nature, leaf-cutter ants (tiny in the extreme) nevertheless combine to form the second most complex social structures on earth (after humans) and can build staggeringly large homes.

When we combine the billions of smart devices into the final IoT, we should expect to be surprised by the final form all those interactions take, and by the complexity of the thing we create.  Those things can and will work together, and how they behave will be defined by the identities we give them today.

Geoff Webb is Director of Solution Strategy at NetIQ.

USP launches the “Chuva Online” project (IAG)

With mini weather radars, the Institute of Astronomy, Geophysics and Atmospheric Sciences (IAG) tests a low-cost technology for small and large cities

Just a few days before the start of summer, USP is launching the Chuva Online (“Online Rain”) project, built around two mini weather radars installed on University of São Paulo buildings. The project is led by USP’s Institute of Astronomy, Geophysics and Atmospheric Sciences (IAG), under the coordination of Professor Carlos Morales.

The project’s launch ceremony takes place on December 16 at 10 a.m. at USP’s School of Arts, Sciences and Humanities (EACH), where one of the mini radars was installed on the school’s water tower. The other unit was installed atop the Pelletron tower at the Institute of Physics (IF), on the Cidade Universitária campus.

One of the project’s goals is to test a new weather-monitoring technology capable of tracking rainfall with high spatial and temporal resolution, which is very useful for small and mid-sized cities. The mini radars were configured for a range of 21 kilometers, a resolution of 90 meters, and scans every 5 minutes.
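
The configuration above (21 km range, 90 m resolution, one scan every 5 minutes) implies some simple numbers, sketched here in Python; these are back-of-the-envelope divisions, not actual radar sampling details.

```python
# Back-of-the-envelope numbers implied by the stated configuration:
# 21 km range, 90 m resolution, one scan every 5 minutes.
range_km = 21
resolution_m = 90
scan_interval_min = 5

range_gates = int(range_km * 1000 / resolution_m)   # samples along each beam
scans_per_day = 24 * 60 // scan_interval_min

print(range_gates)    # 233 range gates per beam
print(scans_per_day)  # 288 scans per day
```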

“It is a simple technology that could be adopted by many cities and by companies that need to know where it is raining and whether there is a chance of flooding in streets and neighborhoods, for example,” explains Professor Carlos Morales (IAG). Each unit costs about 350,000 reais, while a conventional weather radar can cost up to 5 million reais. Another advantage is that the equipment, weighing 100 kg, is quite portable and can be powered from the ordinary electrical grid.

Together, the two mini radars will collect weather data for the São Paulo Metropolitan Region. The data will be available online and in real time on the project’s portal, to be presented during the inauguration. At EACH, two high-definition monitors will display the radar data, while at IAG the data will be shown on a video wall.

Chuva Online is one of the projects that make up the Integrated Urban Infrastructure Management System (SIGINURB) of the administration of USP’s main campus (PUSP-C). Coordinated by Professor Sidnei Martini (USP’s Polytechnic School), SIGINURB seeks to improve the operation of urban infrastructure. With Chuva Online, the campus administration will test technologies that support the management of small cities.

Both projects interact with initiatives of USP’s Center for Disaster Studies and Research (CEPED/USP). With the approval by CAPES (Coordenadoria de Aperfeiçoamento de Pessoal de Nível Superior) of CEPED USP’s PRÓ-ALERTA project, coordinated by professors Carlos Morales and Hugo Yoshizaki, the Chuva Online network will also be used to train specialists from the National Center for Natural Disaster Monitoring and Alerts (Cemaden) and the São Paulo State Civil Defense. With these radars and this technology, USP’s undergraduate and graduate programs gain an important tool for training students in radar meteorology, as well as for developing applications and making very short-term weather forecasts.

The mini radar at IF/USP was installed through an IAG project with PUSP-C. At EACH, a partnership was formed between IAG, the company Climatempo, and Fundespa. This mini radar network will also begin receiving data from a third weather radar, to be installed at Parque da Água Funda, where IAG maintains its Meteorological Station. That third radar will be operated by the Hydraulic Technology Center Foundation (FCTH), with support from the French government, and is scheduled for installation in February 2015.

During the inauguration event, the Chuva Online portal and its features will be presented to the public, with high-resolution georeferenced maps, along with details about the Chuva Online, SIGINURB, and CEPED projects of USP and Climatempo.

For more information, contact Professor Carlos Morales by e-mail: and by phone at (11) 3091-4713.


Young ‘biohackers’ implant chips in their hands to open their front door (Folha de S.Paulo)



07/12/2014 02h00

Paulo Cesar Saito, 27, no longer uses a key to enter his apartment in Pinheiros. Since last month, the door “recognizes” when he arrives. He just holds his palm in front of the lock and it opens.

The magic lies in the chip that he implanted in his own hand (with the help of a friend who studies medicine). Slightly larger than a grain of rice, the chip uses radio-frequency identification. When it is nearby, a base station on the door triggers a pre-programmed action, in this case opening the lock.
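
The door-opening flow described above can be sketched in a few lines of Python. The tag UID, the authorized-tag list, and the `unlock()` hook are hypothetical placeholders, not details from the article.

```python
# A minimal sketch of the control flow: an RFID reader reports a tag's
# unique ID, and a controller opens the lock only for known IDs.
# The UID below and the unlock() hook are hypothetical placeholders.
AUTHORIZED_TAGS = {"04:A3:2F:1B:6C:80:91"}  # UID of the implanted chip

def on_tag_detected(tag_uid, unlock):
    """Called by the reader when a tag enters the RF field."""
    if tag_uid in AUTHORIZED_TAGS:
        unlock()          # e.g. energize the strike plate or servo
        return True
    return False          # unknown tag: door stays locked

opened = on_tag_detected("04:A3:2F:1B:6C:80:91", unlock=lambda: None)
print(opened)  # True
```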

Installing technological modifications in one’s own body is one of the activities of a movement that emerged in 2008 in the US and is known around the world as biohacking: getting involved in biology experiments outside large laboratories.

They are basically the same nerds who build electronic gadgets in the garage and dig deep into computer systems. Only now they are venturing into biotechnology.

DIYBio (do-it-yourself biology) groups borrow concepts from the hacker movement: access to information, dissemination of knowledge, and simple, cheap solutions to improve life. And they are open to amateur scientists: undergraduates, or people not necessarily trained in biology.

Saito, for example, started studying physics and meteorology at USP but now devotes himself entirely to his technology start-up. His involvement with biohacking is limited to body modifications; he also plans to install a small magnet in his finger. “Since I work with electronic equipment, I get a lot of shocks. The magnet lets you feel magnetic fields, so you avoid the shock,” he says.

His business partner, Erico Perrella, 23, an environmental chemistry undergraduate at USP, is one of the main DIYBio enthusiasts in São Paulo. He also has a tiny scar from the chip he implanted together with his friend. The device is 12 mm long and has a biocompatible coating so that it is not rejected by the body. The coating keeps the chip from migrating and, because it does not adhere to internal tissue, makes it easy to remove. Perrella is also one of the organizers of a DIYBio group that meets every Monday.

The movement is just getting started in São Paulo, but it is already drawing attention worldwide: there are laboratories in about 50 cities, most of them in the US and Europe. Perrella’s group is working to set up São Paulo’s first DIYBio “wetlab”: a sterile space with equipment designed for biological materials.

They meet at Garoa Hacker Clube, a space for technology enthusiasts. The venue, however, has infrastructure geared toward hardware and electronics projects. “A wetlab needs a clean area, which looks more like a kitchen than a workshop,” says chemistry student Otto Werner Heringer, 24, a member of the group. “Garoa already has an area like that; our idea is to bring in and leave more equipment [on site].”

Taking advantage of “geek” spaces is common in the movement. Amsterdam’s Open Wetlab, for example, began as part of the Waag Society, a nonprofit institute that promotes art, science, and technology.

Certain experiments require complex equipment that can cost thousands of dollars. “The solution is to build some things ourselves and repair old equipment the university was going to throw away,” Heringer explains.

A large share of biohackers devote themselves more to building equipment than to experiments. Heringer has already made a centrifuge from a 3D-printed part attached to a power drill. He is now building a cell counter. With help from friends, Perrella built bioreactors from material recycled from a mining company.

For these enthusiastic young people, doing science outside academia or industry has big advantages.

Away from the university’s meticulous oversight, it is possible to develop projects without the approval of numerous committees and boards. “The [academic] environment is very rigid. You get discouraged,” says Heringer.

The amateurs’ work even ends up contributing to “formal” science. Heringer and friends are building an automatic pipetting machine at the InovaLab of USP’s Polytechnic School, based on a DIYBio project and funded by an alumni fund. “We would never get funding through USP’s normal channels. And if we did, it would take forever!” he says.


Broad access raises concerns: couldn’t amateur labs create harmful organisms? Advocates say that DIYBio practitioners have every interest in keeping things within safety standards; if something goes wrong, oversight will tighten and make life harder.

Brazil has no regulation for amateur laboratories. In the US, the FBI monitors the movement and there are restrictions on the use of some materials, but no specific regulation.

French scientist Thomas Landrain, who studies the movement, argues in his research that these spaces are not yet sophisticated enough to cause problems.

But despite the technical limitations, the labs open up countless possibilities. “Those who get involved deeply believe in the transformative potential of these new technologies,” explains Perrella, who has a project on mining with bacteria. Some groups focus on health, creating food-contamination sensors or “biological maps” that can track the evolution of diseases.

It is also possible to work with DNA barcoding, a method that identifies which species a tissue sample belongs to. “You could check what kind of meat is in the esfirra from Habib’s,” says Perrella, citing a meat-analysis experiment already under way at the Open Wetlab in Amsterdam. You could even find out which neighbor doesn’t pick up after their dog. That is what the German Sascha Karberg did, comparing hair from neighborhood dogs with the “present” at his doorstep. The method used in projects like this is available to other biohackers. The risk is more quarrels between neighbors.

Geoengineering Gone Wild: Newsweek Touts Turning Humans Into Hobbits To Save Climate (Climate Progress)


Matamata, New Zealand - "Hobbiton," site created for filming Hollywood blockbusters The Hobbit and Lord of the Rings.

A Newsweek cover story touts genetically engineering humans to be smaller, with better night vision (like, say, hobbits) to save the Earth.

Newsweek has an entire cover story devoted to raising the question, “Can Geoengineering Save the Earth?” After reading it, though, you may not realize the answer is a resounding “no.” In part that’s because Newsweek manages to avoid quoting even one of the countless general critics of geoengineering in its 2700-word (!) piece.

Geoengineering is not a well-defined term, but at its broadest, it is the large-scale manipulation of the Earth and its biosphere to counteract the effects of human-caused global warming. Global warming itself is geo-engineering — originally unintentional, but now, after decades of scientific warnings, not so much.

I have likened geoengineering to a dangerous, never tested, course of chemotherapy prescribed to treat a condition curable through diet and exercise — or, in this case, greenhouse gas emissions reduction. If your actual doctor were to prescribe such a treatment, you would get another doctor.

The media likes geoengineering stories because they are clickbait involving all sorts of eye-popping science fiction (non)solutions to climate change that don’t actually require anything of their readers (or humanity) except infinite credulousness. And so Newsweek informs us that adorable ants might solve the problem or maybe phytoplankton can if given Popeye-like superstrength with a diet of iron or, as we’ll see, maybe we humans can, if we allow ourselves to be turned into hobbit-like creatures. The only thing they left out was time-travel.

The author does talk to an unusually sober expert supporter of geoengineering, climatologist Ken Caldeira. Caldeira knows that of all the proposed geoengineering strategies, only one makes even the tiniest bit of sense — and he knows even that one doesn’t make much sense. That would be the idea of spewing vast amounts of tiny particulates (sulfate aerosols) into the atmosphere to block sunlight, mimicking the global temperature drops that follow volcanic eruptions. But Newsweek notes the caveat: “that said, Caldeira doesn’t believe any method of geoengineering is really a good solution to fighting climate change — we can’t test them on a large scale, and implementing them blindly could be dangerous.”

Actually, it’s worse than that. As Caldeira told me in 2009, “If we keep emitting greenhouse gases with the intent of offsetting the global warming with ever increasing loadings of particles in the stratosphere, we will be heading to a planet with extremely high greenhouse gases and a thick stratospheric haze that we would need to maintain more-or-less indefinitely. This seems to be a dystopic world out of a science fiction story.”

And the scientific literature has repeatedly explained that the aerosol-cooling strategy — or indeed any large-scale effort to manipulate sunlight — is very dangerous. Just last month, the UK Guardian reported that the aerosol strategy “risks ‘terrifying’ consequences including droughts and conflicts,” according to recent studies.

“Billions of people would suffer worse floods and droughts if technology was used to block warming sunlight, the research found.”

And remember, this dystopic world where billions suffer is the best geoengineering strategy out there. And it still does nothing to stop the catastrophic acidification of the ocean.

There simply is no rational or moral substitute for aggressive greenhouse gas cuts. But Newsweek quickly dispenses with that supposedly “seismic shift in what has become a global value system” so it can move on to its absurdist “reimagining of what it means to be human”:

In a paper released in 2012, S. Matthew Liao, a philosopher and ethicist at New York University, and some colleagues proposed a series of human-engineering projects that could make our very existence less damaging to the Earth. Among the proposals were a patch you can put on your skin that would make you averse to the flavor of meat (cattle farms are a notorious producer of the greenhouse gas methane), genetic engineering in utero to make humans grow shorter (smaller people means fewer resources used), technological reengineering of our eyeballs to make us better at seeing at night (better night vision means lower energy consumption)….

Yes, let’s turn humans into hobbits (who are “about 3 feet tall” and whose “night vision is excellent”). Anyone can see that could easily be done for billions of people in the timeframe needed to matter. Who could imagine any political or practical objection?

Now you may be thinking that Newsweek can’t possibly be serious devoting ink to such nonsense. But if not, how did the last two paragraphs of the article make it to print:

Geoengineering, Liao argues, doesn’t address the root cause. Remaking the planet simply attempts to counteract the damage that’s been done, but it does nothing to stop the burden humans put on the planet. “Human engineering is more of an upstream solution,” says Liao. “You get right to the source. If we’re smaller on average, then we can have a smaller footprint on the planet. You’re looking at the source of the problem.”

It might be uncomfortable for humans to imagine intentionally getting smaller over generations or changing their physiology to become averse to meat, but why should seeding the sky with aerosols be any more acceptable? In the end, these are all actions we would enact only in worst-case scenarios. And when we’re facing the possible devastation of all mankind, perhaps a little humanity-wide night vision won’t seem so dramatic.

Memo to Newsweek: We are already facing the devastation of all mankind. And science has already provided the means of our “rescue,” the means of reducing “the burden humans put on the planet” — the myriad carbon-free energy technologies that reduce greenhouse gas emissions. Perhaps LED lighting would make a slightly more practical strategy than reengineering our eyeballs, though perhaps not one dramatic enough to inspire one of your cover stories.

As Caldeira himself has said elsewhere of geoengineering, “I think that 99% of our effort to avoid climate change should be put on emissions reduction, and 1% of our effort should be looking into these options.” So perhaps Newsweek will consider 99 articles on the real solutions before returning to the magical thinking of Middle Earth.

Underwater city designed in Japan could house 5,000 residents (Portal do Meio Ambiente)


Architectural design of an underwater city: an alternative for 2030 (Photo: AFP)

A Japanese construction company says that, in the future, humans could live in large underwater housing complexes.

Under the plan, about 5,000 people could live and work in modern versions of the lost city of Atlantis.

The structures would include hotels, residential spaces, and commercial complexes, the website Business Insider reported.

A large globe floating on the sea surface, which could be submerged in bad weather, would be the center of a gigantic spiral structure plunging to depths of up to 4,000 meters.

The spiral would form a 15-kilometer path from a building down to the ocean floor, which could serve as a factory for harvesting resources such as metals and rare earths.

Visionaries at the construction firm Shimizu say it would be possible to use micro-organisms to convert carbon dioxide captured at the surface into methane.


Energy. The concept was developed together with several organizations, including the University of Tokyo and Japan’s science and technology agency.

The large difference in water temperature between the top and the bottom of the sea could be used to generate energy.
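
Generating power from that temperature difference (ocean thermal energy conversion) is bounded by Carnot efficiency. A quick Python estimate, with illustrative temperatures of roughly 25 °C surface water and 5 °C deep water (the article gives no figures):

```python
# Rough upper bound on the efficiency of a heat engine running between
# warm surface water and cold deep water. Temperatures are illustrative.
t_surface_k = 25 + 273.15   # ~25 C surface water, in kelvin
t_deep_k = 5 + 273.15       # ~5 C deep water, in kelvin

carnot_efficiency = 1 - t_deep_k / t_surface_k
print(round(carnot_efficiency * 100, 1))  # ~6.7% theoretical maximum
```

The low ceiling is why such schemes need very large volumes of water to produce useful amounts of power.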

The construction firm Shimizu says the underwater city would cost about three trillion yen (US$ 25 billion), and all the technology could be available by 2030.

The company has already designed a floating metropolis and a ring of solar power around the moon.

Source: Estadão.

On a website, indigenous people teach their history and knock down prejudices (Estadão)

Índio Educa publishes multimedia teaching material on the histories, traditions, and struggles of Brazil’s peoples

Whenever Casé Angatu, a Xucuru indigenous man, leaves Ilhéus, in Bahia, to give a course on indigenous cultures in São Paulo, he hears from some participant: “Do you eat people?” So used to being reduced to stereotypes, Casé laughs, shrugs it off, and takes the opportunity to present the Índio Educa project to the group, in front of the Pátio do Colégio. On the website, indigenous people from all over Brazil produce multimedia teaching material about their histories, traditions, and struggles.

Read the full text at: ,em-site-indigenas-ensinam-sua-historia,1601271

(Estado S.Paulo)

Neo-Zapatista demonstrations (Fapesp)


Beyond the demands against public spending on the organization of the World Cup and for improvements in transportation, health, and education, the June 2013 demonstrations in Brazil highlighted a symbolic expression of the articulations of so-called “net-activism,” the key term of a study funded by FAPESP. In the video produced by the Pesquisa FAPESP team, sociologist Massimo Di Felice, of the Atopos Research Center at the University of São Paulo’s School of Communications and Arts (ECA-USP) and coordinator of the study, discusses the nature and place of net-activist actions and how digital networks and new mobile connectivity devices are changing practices of social participation in Brazil and around the world.

High-tech mirror beams heat away from buildings into space (Science Daily)


November 26, 2014


Stanford School of Engineering


Engineers have invented a material designed to help cool buildings. The material reflects incoming sunlight, and it sends heat from inside the structure directly into space as infrared radiation.


Stanford engineers have invented a material designed to help cool buildings. The material reflects incoming sunlight and sends heat from inside the structure directly into space as infrared radiation – represented by reddish rays. Credit: Illustration: Nicolle R. Fuller, Sayo-Art LLC

Stanford engineers have invented a revolutionary coating material that can help cool buildings, even on sunny days, by radiating heat away from the buildings and sending it directly into space.

A new ultrathin multilayered material can cool buildings without air conditioning by radiating warmth from inside the buildings into space while also reflecting sunlight to reduce incoming heat.

A team led by electrical engineering Professor Shanhui Fan and research associate Aaswath Raman reported this energy-saving breakthrough in the journal Nature.

The heart of the invention is an ultrathin, multilayered material that deals with light, both invisible and visible, in a new way.

Invisible light in the form of infrared radiation is one of the ways that all objects and living things throw off heat. When we stand in front of a closed oven without touching it, the heat we feel is infrared light. This invisible, heat-bearing light is what the Stanford invention shunts away from buildings and sends into space.

Of course, sunshine also warms buildings. The new material, in addition to dealing with infrared light, is also a stunningly efficient mirror that reflects virtually all of the incoming sunlight that strikes it.

The result is what the Stanford team calls photonic radiative cooling — a one-two punch that offloads infrared heat from within a building while also reflecting the sunlight that would otherwise warm it up. The result is cooler buildings that require less air conditioning.

“This is very novel and an extraordinarily simple idea,” said Eli Yablonovitch, a professor of engineering at the University of California, Berkeley, and a pioneer of photonics who directs the Center for Energy Efficient Electronics Science. “As a result of professor Fan’s work, we can now [use radiative cooling], not only at night but counter-intuitively in the daytime as well.”

The researchers say they designed the material to be cost-effective for large-scale deployment on building rooftops. Though it’s still a young technology, they believe it could one day reduce demand for electricity. As much as 15 percent of the energy used in buildings in the United States is spent powering air conditioning systems.

In practice the researchers think the coating might be sprayed on a more solid material to make it suitable for withstanding the elements.

“This team has shown how to passively cool structures by simply radiating heat into the cold darkness of space,” said Nobel Prize-winning physicist Burton Richter, professor emeritus at Stanford and former director of the research facility now called the SLAC National Accelerator Laboratory.

A warming world needs cooling technologies that don’t require power, according to Raman, lead author of the Nature paper. “Across the developing world, photonic radiative cooling makes off-grid cooling a possibility in rural regions, in addition to meeting skyrocketing demand for air conditioning in urban areas,” he said.

Using a window into space

The real breakthrough is how the Stanford material radiates heat away from buildings.

As science students know, heat can be transferred in three ways: conduction, convection and radiation. Conduction transfers heat by touch. That’s why you don’t touch a hot oven pan without wearing a mitt. Convection transfers heat by movement of fluids or air. It’s the warm rush of air when the oven is opened. Radiation transfers heat in the form of infrared light that emanates outward from objects, sight unseen.

The first part of the coating’s one-two punch radiates heat-bearing infrared light directly into space. The ultrathin coating was carefully constructed to send this infrared light away from buildings at the precise frequency that allows it to pass through the atmosphere without warming the air, a key feature given the dangers of global warming.

“Think about it like having a window into space,” Fan said.

Aiming the mirror

But transmitting heat into space is not enough on its own.

This multilayered coating also acts as a highly efficient mirror, preventing 97 percent of sunlight from striking the building and heating it up.

“We’ve created something that’s a radiator that also happens to be an excellent mirror,” Raman said.

Together, the radiation and reflection make the photonic radiative cooler nearly 9 degrees Fahrenheit cooler than the surrounding air during the day.

The multilayered material is just 1.8 microns thick, thinner than the thinnest aluminum foil.

It is made of seven layers of silicon dioxide and hafnium oxide on top of a thin layer of silver. These layers are not a uniform thickness, but are instead engineered to create a new material. Its internal structure is tuned to radiate infrared rays at a frequency that lets them pass into space without warming the air near the building.
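
For a rough sense of scale of the heat a surface can shed by radiation, the Stefan-Boltzmann law gives the total power of an ideal emitter. The Python back-of-the-envelope below deliberately ignores the spectral selectivity (the infrared atmospheric window) that the Stanford coating actually exploits, and the 300 K temperature is illustrative:

```python
# Order-of-magnitude check on radiative cooling via the Stefan-Boltzmann law.
# This ignores spectral selectivity and atmospheric absorption; it only shows
# how much power an ideal emitter radiates per square meter.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiated_power(temp_k, emissivity=1.0):
    """Total power radiated per square meter by a surface at temp_k."""
    return emissivity * SIGMA * temp_k**4

print(round(radiated_power(300)))  # ~459 W/m^2 for an ideal emitter at 300 K
```

Only the fraction of that power emitted in the 8–13 micron window escapes to space without warming the air, which is why the layer thicknesses are tuned so carefully.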

“This photonic approach gives us the ability to finely tune both solar reflection and infrared thermal radiation,” said Linxiao Zhu, doctoral candidate in applied physics and a co-author of the paper.

“I am personally very excited about their results,” said Marin Soljacic, a physics professor at the Massachusetts Institute of Technology. “This is a great example of the power of nanophotonics.”

From prototype to building panel

Making photonic radiative cooling practical requires solving at least two technical problems.

The first is how to conduct the heat inside the building to this exterior coating. Once it gets there, the coating can direct the heat into space, but engineers must first figure out how to efficiently deliver the building heat to the coating.

The second problem is production. Right now the Stanford team’s prototype is the size of a personal pizza. Cooling buildings will require large panels. The researchers say large-area fabrication facilities can make their panels at the scales needed.

The cosmic fridge

More broadly, the team sees this project as a first step toward using the cold of space as a resource. In the same way that sunlight provides a renewable source of solar energy, the cold universe supplies a nearly unlimited expanse to dump heat.

“Every object that produces heat has to dump that heat into a heat sink,” Fan said. “What we’ve done is to create a way that should allow us to use the coldness of the universe as a heat sink during the day.”

In addition to Fan, Raman and Zhu, this paper has two additional co-authors: Marc Abou Anoma, a master’s student in mechanical engineering who has graduated; and Eden Rephaeli, a doctoral student in applied physics who has graduated.

This research was supported by the Advanced Research Project Agency-Energy (ARPA-E) of the U.S. Department of Energy.

Story Source:

The above story is based on materials provided by Stanford School of Engineering. The original article was written by Chris Cesare. Note: Materials may be edited for content and length.

Journal Reference:

  1. Aaswath P. Raman, Marc Abou Anoma, Linxiao Zhu, Eden Rephaeli, Shanhui Fan. Passive radiative cooling below ambient air temperature under direct sunlight. Nature, 2014; 515 (7528): 540 DOI: 10.1038/nature13883

Climate manipulation may cause unwanted effects (N.Y.Times/FSP)

Ilvy Njiokiktjien/The New York Times
Olivine, a green-tinted mineral said to remove carbon dioxide from the atmosphere, in the hands of retired geochemist Olaf Schuiling in Maasland, Netherlands, Oct. 9, 2014. Once considered the stuff of wild-eyed fantasies, such ideas for countering climate change — known as geoengineering solutions — are now being discussed seriously by scientists. (Ilvy Njiokiktjien/The New York Times)
Olivine, a greenish mineral that would help remove carbon dioxide from the atmosphere


18/11/2014 02h01

For Olaf Schuiling, the solution to global warming lies beneath our feet.

Schuiling, a retired geochemist, believes that climate salvation lies in olivine, a green-tinted mineral abundant throughout the world. When exposed to the elements, it slowly pulls carbon dioxide out of the atmosphere.

Olivine has been doing this naturally for billions of years, but Schuiling wants to speed up the process by spreading it on fields and beaches and using it in dikes, trails and even playgrounds. Sprinkle the right amount of crushed rock, he says, and it will eventually remove enough carbon dioxide to slow the rise of global temperatures.
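
The arithmetic behind "the right amount of crushed rock" can be sketched with simple stoichiometry. This is a back-of-envelope upper bound under stated assumptions, not a figure from Schuiling:

```python
# Stoichiometric sketch of olivine weathering. Assumptions: pure
# magnesium end-member (forsterite, Mg2SiO4) and complete reaction via
#   Mg2SiO4 + 4 CO2 + 4 H2O -> 2 Mg(2+) + 4 HCO3(-) + H4SiO4
# Real olivine contains iron, weathers slowly and only partially, and
# mining/grinding/transport emit CO2, so this is an upper bound.
M = {"Mg": 24.305, "Si": 28.086, "O": 15.999, "C": 12.011}

m_olivine = 2 * M["Mg"] + M["Si"] + 4 * M["O"]  # ~140.7 g/mol
m_co2 = M["C"] + 2 * M["O"]                     # ~44.0 g/mol

co2_per_tonne_olivine = 4 * m_co2 / m_olivine   # 4 mol CO2 per mol olivine
print(f"{co2_per_tonne_olivine:.2f} t CO2 per t olivine")  # ~1.25
```

At best, then, each tonne of spread olivine could eventually bind on the order of a tonne of CO2, which is why offsetting billions of tonnes of annual emissions would require mining on a comparable scale.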

“Let the Earth help us save it,” said Schuiling, 82, in his office at Utrecht University.
Ideas for countering climate change, such as these geoengineering proposals, were once considered mere fantasy.

Yet the effects of climate change may become so severe that such solutions could come to be taken seriously.

Schuiling's idea is one of several aimed at reducing levels of carbon dioxide, the main greenhouse gas, so that the atmosphere retains less heat.

Other approaches, potentially faster and more feasible but riskier, would create the equivalent of a sunshade around the planet by dispersing reflective droplets in the stratosphere or spraying seawater to form more clouds over the oceans. With less sunlight reaching the Earth's surface, less heat would be retained, producing a rapid drop in temperatures.

No one is sure that any geoengineering technique would work, and many approaches in the field seem impractical. Schuiling's, for example, would take decades to have even a small impact, and the mining, grinding and transport of the billions of tons of olivine required would themselves produce enormous carbon emissions.

Jasper Juinen/The New York Times
Kids play on a playground made with Olivine, a material said to remove carbon dioxide from the atmosphere, in Arnhem, Netherlands, Oct. 9, 2014. Once considered the stuff of wild-eyed fantasies, such ideas for countering climate change — known as geoengineering solutions — are now being discussed seriously by scientists. (Jasper Juinen/The New York Times)
Children play on a playground in the Netherlands surfaced with olivine; the greenish mineral slowly removes carbon dioxide from the atmosphere

Many people see geoengineering as a desperate response to climate change, one that would distract the world from the goal of eliminating the emissions at the root of the problem.

The climate is a highly complex system, so manipulating temperatures could also have consequences, such as changes in rainfall, that are catastrophic, or that benefit one region at the expense of another. Critics also point out that geoengineering could be deployed unilaterally by a single country, creating yet another source of geopolitical tension.

Some experts argue, however, that the current situation is becoming calamitous. “We may soon be left with only the choice between geoengineering and suffering,” said Andy Parker of the Institute for Advanced Sustainability Studies in Potsdam, Germany.

In 1991, a volcanic eruption in the Philippines ejected the largest cloud of sulfur dioxide ever recorded into the upper atmosphere. The gas formed droplets of sulfuric acid, which reflected sunlight back into space. For three years, average global temperatures dropped by about 0.5 degrees Celsius. One geoengineering technique would mimic that effect by spraying sulfuric acid droplets into the stratosphere.
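
The Pinatubo episode can be caricatured with a zero-dimensional energy-balance model. Every parameter value below is an illustrative assumption chosen for the sketch, not a figure from the article, but the result lands in the observed few-tenths-of-a-degree range:

```python
# Zero-dimensional energy-balance caricature of a Pinatubo-like aerosol
# pulse: C dT/dt = F(t) - T/LAMBDA. All numbers are assumptions.
import math

F0 = -4.0              # peak radiative forcing, W/m^2 (assumed)
TAU_F = 365 * 86400.0  # aerosol e-folding lifetime, ~1 year (assumed)
LAMBDA = 0.8           # climate sensitivity parameter, K per W/m^2 (assumed)
C = 1000 * 4180 * 50   # heat capacity of a 50 m ocean mixed layer, J m^-2 K^-1

dt = 86400.0           # one-day time step
T, t, peak = 0.0, 0.0, 0.0
for _ in range(10 * 365):  # integrate 10 years
    forcing = F0 * math.exp(-t / TAU_F)
    T += dt * (forcing - T / LAMBDA) / C
    t += dt
    peak = min(peak, T)

print(f"peak cooling ~ {peak:.2f} K")  # a few tenths of a degree
```

The ocean's thermal inertia is what keeps the short-lived pulse from producing the full equilibrium cooling, and it is also why temperatures rebound within a few years once the aerosols settle out.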

David Keith, a researcher at Harvard University, said this geoengineering technique, known as solar radiation management (SRM), should only be deployed slowly and carefully, so that it can be halted if it disrupts weather patterns or causes other problems.

Some critics of geoengineering doubt that any impacts could be balanced. People in developing countries are affected by climate change largely caused by the actions of industrialized nations. Why, then, would they trust that spraying droplets into the sky would help them?

“Nobody likes being the rat in someone else's laboratory,” said Pablo Suarez of the Red Cross/Red Crescent Climate Centre.

Ideas for removing carbon dioxide from the air cause less alarm. Although they raise thorny issues (olivine, for example, contains small amounts of metals that could contaminate the environment), they would work far more slowly and indirectly, affecting the climate over decades by altering the atmosphere.

Because Schuiling has promoted his idea in the Netherlands for years, the country has embraced olivine. Anyone aware of it can spot the crushed rock on trails, in gardens and in play areas.

Eddy Wijnker, a former acoustic engineer, founded the company greenSand in the small town of Maasland. It sells olivine sand for home or commercial use, along with “green sand certificates” that fund the placement of the sand along highways.

Schuiling's persistence has also spurred research. At the Royal Netherlands Institute for Sea Research in Yerseke, ecologist Francesc Montserrat is studying the possibility of spreading olivine on the seabed. In Belgium, researchers at the University of Antwerp are studying the effects of olivine on crops such as barley and wheat.

Most geoengineering researchers point to the need for further study and to the limits of computer simulations.

Little funding worldwide goes to geoengineering research. Yet even the suggestion of field experiments can provoke public outcry. “People like bright lines, and a pretty obvious one is that it's fine to test things on a computer or at a lab bench,” said Matthew Watson of the University of Bristol in the United Kingdom. “But they react badly as soon as you start to move into the real world.”

Watson knows these limits well. He led a British government-funded project that included a relatively innocuous test of one technology. In 2011, the researchers planned to fly a balloon to an altitude of about one kilometer and try to pump a small amount of water up a hose to it. The proposal set off protests in the United Kingdom; the test was postponed for half a year and finally canceled.

Today there is little prospect of government support for any kind of geoengineering test in the United States, where many politicians deny that climate change is even real.

“The conventional wisdom is that the right doesn't want to talk about it because doing so acknowledges the problem,” said Rafe Pomerance, who worked on environmental issues at the State Department. “And the left is worried about the impact of emissions.”

So it would be good to discuss the subject openly, Pomerance said. “It will still take some time, but it's inevitable,” he added.

Worlding Anthropologies of Technosciences?

October 28th, 2014

The past 4S meeting in Buenos Aires made visible the expansion of STS to various regions of the globe. Those of us who happened to be at the 4S meeting at University of Tokyo four years ago will remember the excitement of having the opportunity to work side-by-side with STS scholars from East and Southeast Asia. The same opportunity for worlding STS was opened again this past summer in Buenos Aires.

In order to help increase diversity of perspectives, Sharon Traweek and I organized a 4S panel on the relationships between STS and anthropology with a focus on the past, present, and future of the exchange among national traditions. The idea came out of our conversations about the intersections between science studies and the US anthropology of the late 1980’s with the work of CASTAC pioneers such as Diana Forsythe, Gary Downey, Joseph Dumit, David Hakken, David Hess, and Sharon Traweek, among several others who helped to establish the technosciences as legitimate domains of anthropological inquiry. It was not an easy battle, as Chris Furlow’s post on the history of CASTAC reminded us, but the results are undeniably all around us today. Panels on anthropology of science and technology can always be found at professional meetings. Publications on science and technology have space in various journals and the attention of university publishers these days.

For our panel this year we had the opening remarks of Gary Downey who, after reading our proposal aloud, emphasized the importance of advancing a cultural critique of science and technology through a situated, grounded stance. Quoting Marcus and Fischer’s “Anthropology as Cultural Critique” (1986) he emphasized that anthropology of science and technology could not dispense with the reflection upon the place, the situation, and the positioning of the anthropologist. Downey described his own positioning as an anthropologist and critical participant in engineering. Two decades ago Downey challenged the project of “anthropology as cultural critique” to speak widely to audiences outside anthropology and to practice anthropology as cultural critique, as suggested by the title of his early AAA paper, “Outside the Hotel”.

Yet “Anthropology as Cultural Critique” represented, he pointed out, one of the earliest reflexive calls in US anthropology for us to rethink canonical fieldwork orientations and our approach to the craft of ethnography with its representational politics. Downey and many others who invented new spaces to advance critical agendas in the context of science and technology did so by adding to the identity of the anthropologist other identities and responsibilities, such as that of former mechanical engineer, laboratory physicist, theologian, and experimenter of alternative forms of sociality, etc. These overlapping and intersecting identities opened up a whole field of possibilities for renewed modes of inquiry which, after “Anthropology as Cultural Critique”, consisted, as Downey suggested, in the juxtaposition of knowledge, forms of expertise, positionalities, and commitments. This is where we operate as STS scholars: at intersecting research areas, bridging “fault lines” (as Traweek’s felicitous expression puts it), and doing anthropology with and not without anthropologists.

The order of presentations for our panel was defined in a way to elicit contrasts and parallels between different modes of inquiry, grounded in different national anthropological traditions. The first session had Marko Monteiro (UNICAMP), Renzo Taddei (UNIFESP), Luis Felipe R. Murillo (UCLA), and Aalok Khandekar (Maastricht University) as presenters and Michael M. J. Fischer (MIT) as commentator. Marko Monteiro, an anthropologist working for an interdisciplinary program in science and technology policy in Brazil addressed questions of scientific modeling and State policy regarding the issue of deforestation in the Amazon. His paper presented the challenges of conducting multi-sited ethnography alongside multinational science collaborations, and described how scientific modeling for the Amazalert project was designed to accommodate natural and sociocultural differences with the goal of informing public policy. In the context of his ethnographic work, Monteiro soon found himself in a double position as a panelist expert and as an anthropologist interested in how different groups of scientists and policy makers negotiate the incorporation of “social life” through a “politics of associations.”

Similarly to Monteiro’s positioning, Khandekar benefited in his ethnographic work for being an active participant and serving as the organizer of expert panels involving STS scholars and scientists to design nanotechnology-based development programs in India. Drawing from Fischer’s notion of “third space”, Khandekar addressed how India could be framed productively as such for being a fertile ground for conceptual work where cross-disciplinary efforts have articulated humanities and technosciences under the rubric of innovation. Serving as a knowledge broker for an international collaboration involving India, Kenya, South Africa, and the Netherlands on nanotechnology, Khandekar had first-hand experience in promoting “third spaces” as postcolonial places for cross-disciplinary exchange through story telling.

Shifting the conversation to the context of computing and political action, Luis Felipe R. Murillo’s paper described a controversy surrounding the proposal of a “feminist programming language” and discussed the ways in which it provides access to the contemporary technopolitical dynamics of computing. The feminist programming language parody served as an entry point to analyze how language ideologies render symbolic boundaries visible, highlighting fundamental aspects of socialization in the context of computing in order to reproduce concepts and notions of the possible, logical, and desirable technical solutions. In respect to socioeconomic and political divisions, he suggested that feminist approaches in their intersectionality became highly controversial for addressing publicly systemic inequalities that are transversal to the context of computing and characterize a South that is imbricated in the North of “big computing” (an apparatus that encompasses computer science, information technology industries, infrastructures, and cultures with their reinvented peripheries within the global North and South).

Renzo Taddei recast the debate on belief in magic, drawing on a long-standing thread of anthropological research on logical reasoning and cultural specificity. Taddei opened his take on our conversation with the assertion that to conduct ethnography on witchcraft assuming that it does not exist is fundamentally ethnocentric. This observation was meant to take us to the core of his concerns regarding climate sciences vis-à-vis traditional Brazilian forms of forecasting from the Sertão, a semi-arid and extremely impoverished area of northeastern Brazil. He then proceeded to discuss magical manipulation of the atmosphere from native and Afro-Brazilian perspectives in Brazil.

For the second day of our panel, we had papers by Kim Fortun (RPI), Mike Fortun (RPI), Sharon Traweek (UCLA) and the commentary of Claudia Fonseca (UFRGS), whose long-term contributions to the study of adoption, popular culture, science and human rights in Brazil have been highly influential. In her paper, Kim Fortun addressed the double bind of expertise, the in-between of competence and hubris, and the structural risk and unpredictability of the very infrastructures experts are called upon to take responsibility for. Fortun's call was for a mode of interaction and engagement among science and humanities scholars oriented toward friendship and hospitality as well as commitment to our technoscientific futures under the aegis of late industrialism. “Ethnographic insight”, according to Fortun, “can loop back into the world” through creative pedagogies attentive to the fact that science practitioners and STS scholars mobilize different analytic lenses while speaking through and negotiating with distinct discursive registers in the context of international collaborations. Our assumptions of what is conceptually shared should not anticipate what is to be seen or forged in the context of our international exchange, since what is foregrounded in discourse always implicates one form or another of erasure. The image Fortun suggested for us to think with is not that of a network, but that of a kaleidoscope in which the complexity of disasters can be seen across multiple dimensions and scales in their imbrication at every turn.

In his presentation, Michael Fortun questioned the so-called “ontological turn” to recast the “hauntological” dimensions of our research practices vis-à-vis those of our colleagues in the biosciences, that is, to account for the imponderables of scientific and anthropological languages and practices through the lens of a poststructural understanding of the historical functioning of language. In his study of asthma, Fortun attends to multiple perspectives and experiences with asthma across national, socioeconomic, scientific and technical scales. In the context of his project “The Asthma Files”, he suggests, alongside Kim Fortun, hospitality and friendship as frames for engaging instead of disciplining the contingency of ethnographic encounters and ethnographic projects. For future collaborations, two directions are suggested: 1) investigating and experimenting with modes of care and 2) designing collaborative digital platforms for experimental ethnography. The former relates to scientists' care for their instruments, methods, theories, intellectual reproduction, infrastructures, and problems in their particular research fields, while the latter poses the question of care among ourselves and the construction of digital platforms to facilitate and foster collaboration in anthropology.

This panel was closed with Sharon Traweek’s paper on multi-scalar complexity of contemporary scientific collaborations, based on her current research on data practices and gender imbalance in astronomy. Drawing from concepts of meshwork and excess proposed by researchers with distinct intellectual projects such as Jennifer McWeeny, Arturo Escobar, Susan Paulson, and Tim Ingold, Traweek discussed billion-dollar science projects which involve multiple research communities clustered around a few recent research devices and facilities, such as the Atacama Large Millimeter/submillimeter Array (ALMA) in Chile and the International Thermonuclear Experimental Reactor (ITER) in France. In the space of ongoing transformations of big science toward partially-global science, women and ethnic minorities are building meshworks as overlapping networks in their attempts to build careers in astronomy. Traweek proposed a revision of the notion of “enrollment” to account for the ways in which mega projects in science are sustained for decades of planning, development, construction, and operation at excessive scales which require more than support and consensus. Mega projects in the technosciences are, in Traweek’s terms, “over-determined collages that get built and used” by international teams with “glocal” structures of governance and funding.

In his concluding remarks Michael M. J. Fischer addressed the relationship between anthropology and STS through three organizing axes: time, topic, and audiences. As a question of time, a quarter century has passed for the shared history of STS and anthropology and probing questions have been asked and explored in the technosciences in respect to its apparatuses, codes, languages, life cycle of machines, educational curricula, personal and technical trajectories, which is well represented in one of the foundational texts of our field, Traweek’s “Beamtimes and Lifetimes” (1988). Traweek has helped establish a distinctive anthropological style “working alongside scientists and engineers through juxtaposition not against them.” In respect to the relationships between anthropology and STS, Fischer raised the question of pedagogies as, at once, a prominent form of engagement in the technosciences as well as an anthropological mode of engagement with the technosciences. The common thread connecting all the panel contributions was the potential for new pedagogies to emerge with the contribution of world anthropologies of sciences and technologies. That is, in the space of socialization of scientists, engineers, and the public, space of the convention, as well as invention, and knowledge-making, all the presenters addressed the question of how to advance an anthropology of science and technology with forms of participation, as Fischer suggests, as productive critique.

Along similar lines, Claudia Fonseca offered closing remarks about her own trajectory and the persistence of national anthropological traditions informing our cross-dialogs and border crossings. Known in Brazil as an “anthropologist with an accent”, an anthropologist born in the US, trained in France, and based in Brazil for most of her academic life, she cannot help but emphasize the style and forms of engagement specific to Brazilian anthropology, which has a tradition of conducting ethnography at home. The panel served, in sum, for the participants to find a common thread connecting a rather disparate set of papers and for advancing a form of dialogue across national traditions and modes of engagement which is attentive to local political histories and (national) anthropological trajectories. As suggested by Michael Fortun, we are just collectively conjuring – with much more empiria than magic – a new beginning in the experimental tradition for world anthropologies of sciences and technologies.

Latour on digital methods (Installing [social] order)


In a fascinating, apparently not-peer-reviewed non-article available free online here, Tommaso Venturini and Bruno Latour discuss the potential of “digital methods” for the contemporary social sciences.

The paper nicely summarizes the split of sociological methods between quantitative approaches aimed at statistical aggregates (capturing supposedly macro-phenomena) and qualitative approaches aimed at irreducibly basic interactions (capturing supposedly micro-phenomena). The problem is that neither helps the sociologist capture emergent phenomena, that is, controversies and events as they happen, rather than estimating them after they have emerged (quantitative macro structures) or capturing them divorced from non-local influences (qualitative micro phenomena).

The solution, they claim, is to adopt digital methods in the social sciences. The paper is not exactly a methodological outline of how to accomplish these methods, but there is something of a justification available for it, and it sounds something like this:

Thanks to digital traceability, researchers no longer need to choose between precision and scope in their observations: it is now possible to follow a multitude of interactions and, simultaneously, to distinguish the specific contribution that each one makes to the construction of social phenomena. Born in an era of scarcity, the social sciences are entering an age of abundance. In the face of the richness of these new data, nothing justifies keeping old distinctions. Endowed with a quantity of data comparable to the natural sciences, the social sciences can finally correct their lazy eyes and simultaneously maintain the focus and scope of their observations.

Direct brain interface between humans (Science Daily)

Date: November 5, 2014

Source: University of Washington

Summary: Researchers have successfully replicated a direct brain-to-brain connection between pairs of people as part of a scientific study following the team’s initial demonstration a year ago. In the newly published study, which involved six people, researchers were able to transmit the signals from one person’s brain over the Internet and use these signals to control the hand motions of another person within a split second of sending that signal.

In this photo, UW students Darby Losey, left, and Jose Ceballos are positioned in two different buildings on campus as they would be during a brain-to-brain interface demonstration. The sender, left, thinks about firing a cannon at various points throughout a computer game. That signal is sent over the Web directly to the brain of the receiver, right, whose hand hits a touchpad to fire the cannon.Mary Levin, U of Wash. Credit: Image courtesy of University of Washington

Sometimes, words just complicate things. What if our brains could communicate directly with each other, bypassing the need for language?

University of Washington researchers have successfully replicated a direct brain-to-brain connection between pairs of people as part of a scientific study following the team’s initial demonstration a year ago. In the newly published study, which involved six people, researchers were able to transmit the signals from one person’s brain over the Internet and use these signals to control the hand motions of another person within a split second of sending that signal.

At the time of the first experiment in August 2013, the UW team was the first to demonstrate two human brains communicating in this way. The researchers then tested their brain-to-brain interface in a more comprehensive study, published Nov. 5 in the journal PLOS ONE.

“The new study brings our brain-to-brain interfacing paradigm from an initial demonstration to something that is closer to a deliverable technology,” said co-author Andrea Stocco, a research assistant professor of psychology and a researcher at UW’s Institute for Learning & Brain Sciences. “Now we have replicated our methods and know that they can work reliably with walk-in participants.”

Collaborator Rajesh Rao, a UW associate professor of computer science and engineering, is the lead author on this work.

The research team combined two kinds of noninvasive instruments and fine-tuned software to connect two human brains in real time. The process is fairly straightforward. One participant is hooked to an electroencephalography machine that reads brain activity and sends electrical pulses via the Web to the second participant, who is wearing a swim cap with a transcranial magnetic stimulation coil placed near the part of the brain that controls hand movements.

Using this setup, one person can send a command to move the hand of the other by simply thinking about that hand movement.
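
The article does not spell out how the "thinking about hand movement" is detected, so the following is a purely hypothetical sketch of the sender-side loop: watch a mu-band (8-12 Hz) power feature from the EEG, which motor imagery is known to suppress, and emit a "fire" trigger that would then be relayed over the network to drive the receiver's TMS coil. The function name, threshold and refractory period are all illustrative assumptions.

```python
# Hypothetical sender-side sketch (the detection algorithm is assumed,
# not taken from the paper): emit a "fire" trigger when mu-band power
# drops below a threshold, with a refractory gap to avoid re-triggering.
def detect_triggers(mu_power_stream, threshold=0.5, refractory=5):
    """Yield sample indices where mu power falls below threshold,
    enforcing a refractory gap between consecutive triggers."""
    last_fire = -refractory
    for i, p in enumerate(mu_power_stream):
        if p < threshold and i - last_fire >= refractory:
            last_fire = i
            yield i

# Synthetic feature stream: baseline ~1.0, one motor-imagery epoch ~0.2
stream = [1.0] * 10 + [0.2] * 3 + [1.0] * 10
print(list(detect_triggers(stream)))  # -> [10]
```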

The UW study involved three pairs of participants. Each pair included a sender and a receiver with different roles and constraints. They sat in separate buildings on campus about a half mile apart and were unable to interact with each other in any way — except for the link between their brains.

Each sender was in front of a computer game in which he or she had to defend a city by firing a cannon and intercepting rockets launched by a pirate ship. But because the senders could not physically interact with the game, the only way they could defend the city was by thinking about moving their hand to fire the cannon.

Across campus, each receiver sat wearing headphones in a dark room — with no ability to see the computer game — with the right hand positioned over the only touchpad that could actually fire the cannon. If the brain-to-brain interface was successful, the receiver’s hand would twitch, pressing the touchpad and firing the cannon that was displayed on the sender’s computer screen across campus.

Researchers found that accuracy varied among the pairs, ranging from 25 to 83 percent. Misses mostly were due to a sender failing to accurately execute the thought to send the “fire” command. The researchers also were able to quantify the exact amount of information that was transferred between the two brains.
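
One simple way to turn such accuracies into an information figure (an illustrative assumption; the article does not say this is the authors' method) is to model each trial as a binary symmetric channel whose crossover probability is the error rate, and report the channel capacity C = 1 - H(p):

```python
# Channel-capacity view of per-trial accuracy (assumed framing, not
# necessarily the authors'): each trial is a binary symmetric channel
# with crossover probability p = 1 - accuracy; capacity C = 1 - H(p).
import math

def bits_per_trial(accuracy):
    p = 1.0 - accuracy  # per-trial error probability
    if p in (0.0, 1.0):
        return 1.0      # a perfect (or perfectly inverted) channel
    h = -p * math.log2(p) - (1 - p) * math.log2(1 - p)
    return 1.0 - h

print(f"{bits_per_trial(0.83):.2f} bits/trial")  # best-performing pair
print(f"{bits_per_trial(0.25):.2f} bits/trial")  # worst-performing pair
```

On this view even the best pair transfers only a fraction of a bit per trial, which underlines how far the demonstration is from transmitting "concepts, thoughts and rules."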

Another research team from the company Starlab in Barcelona, Spain, recently published results in the same journal showing direct communication between two human brains, but that study only tested one sender brain instead of different pairs of study participants and was conducted offline instead of in real time over the Web.

Now, with a new $1 million grant from the W.M. Keck Foundation, the UW research team is taking the work a step further in an attempt to decode and transmit more complex brain processes.

With the new funding, the research team will expand the types of information that can be transferred from brain to brain, including more complex visual and psychological phenomena such as concepts, thoughts and rules.

They’re also exploring how to influence brain waves that correspond with alertness or sleepiness. Eventually, for example, the brain of a sleepy airplane pilot dozing off at the controls could stimulate the copilot’s brain to become more alert.

The project could also eventually lead to “brain tutoring,” in which knowledge is transferred directly from the brain of a teacher to a student.

“Imagine someone who’s a brilliant scientist but not a brilliant teacher. Complex knowledge is hard to explain — we’re limited by language,” said co-author Chantel Prat, a faculty member at the Institute for Learning & Brain Sciences and a UW assistant professor of psychology.

Other UW co-authors are Joseph Wu of computer science and engineering; Devapratim Sarma and Tiffany Youngquist of bioengineering; and Matthew Bryan, formerly of the UW.

The research published in PLOS ONE was initially funded by the U.S. Army Research Office and the UW, with additional support from the Keck Foundation.

Journal Reference:

  1. Rajesh P. N. Rao, Andrea Stocco, Matthew Bryan, Devapratim Sarma, Tiffany M. Youngquist, Joseph Wu, Chantel S. Prat. A Direct Brain-to-Brain Interface in Humans. PLoS ONE, 2014; 9 (11): e111332 DOI: 10.1371/journal.pone.0111332

Cockroach cyborgs use microphones to detect, trace sounds (Science Daily)

Date: November 6, 2014

Source: North Carolina State University

Summary: Researchers have developed technology that allows cyborg cockroaches, or biobots, to pick up sounds with small microphones and seek out the source of the sound. The technology is designed to help emergency personnel find and rescue survivors in the aftermath of a disaster.

North Carolina State University researchers have developed technology that allows cyborg cockroaches, or biobots, to pick up sounds with small microphones and seek out the source of the sound. The technology is designed to help emergency personnel find and rescue survivors in the aftermath of a disaster. Credit: Eric Whitmire.

North Carolina State University researchers have developed technology that allows cyborg cockroaches, or biobots, to pick up sounds with small microphones and seek out the source of the sound. The technology is designed to help emergency personnel find and rescue survivors in the aftermath of a disaster.

The researchers have also developed technology that can be used as an “invisible fence” to keep the biobots in the disaster area.

“In a collapsed building, sound is the best way to find survivors,” says Dr. Alper Bozkurt, an assistant professor of electrical and computer engineering at NC State and senior author of two papers on the work.

The biobots are equipped with electronic backpacks that control the cockroach’s movements. Bozkurt’s research team has created two types of customized backpacks using microphones. One type of biobot has a single microphone that can capture relatively high-resolution sound from any direction to be wirelessly transmitted to first responders.

The second type of biobot is equipped with an array of three directional microphones to detect the direction of the sound. The research team has also developed algorithms that analyze the sound from the microphone array to localize the source of the sound and steer the biobot in that direction. The system worked well during laboratory testing. Video of a laboratory test of the microphone array system is available at
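
One standard recipe for this kind of steering (a sketch of the general technique, not necessarily NC State's exact algorithm) is to estimate the time difference of arrival (TDOA) between microphone pairs by cross-correlation and turn toward the microphone that heard the sound first:

```python
# TDOA sketch: find the lag that maximizes the cross-correlation
# between two microphone signals. Positive lag means the left
# microphone heard the sound first.
def tdoa_samples(left, right, max_lag):
    """Lag (in samples) of `right` relative to `left` that maximizes
    the cross-correlation between the two signals."""
    best_lag, best_score = 0, float("-inf")
    n = len(left)
    for lag in range(-max_lag, max_lag + 1):
        score = sum(left[i] * right[i + lag]
                    for i in range(n) if 0 <= i + lag < n)
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# Synthetic click that reaches the left microphone 3 samples earlier
left = [0.0] * 20; left[5] = 1.0
right = [0.0] * 20; right[8] = 1.0
print(tdoa_samples(left, right, max_lag=5))  # -> 3
```

With a known microphone spacing d and sound speed c, the lag converts to a bearing via sin(theta) = c*dt/d; a third microphone, as in the biobots' array, resolves the front/back ambiguity a single pair leaves.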

“The goal is to use the biobots with high-resolution microphones to differentiate between sounds that matter — like people calling for help — from sounds that don’t matter — like a leaking pipe,” Bozkurt says. “Once we’ve identified sounds that matter, we can use the biobots equipped with microphone arrays to zero in on where those sounds are coming from.”

A research team led by Dr. Edgar Lobaton has previously shown that biobots can be used to map a disaster area. Funded by National Science Foundation CyberPhysical Systems Program, the long-term goal is for Bozkurt and Lobaton to merge their research efforts to both map disaster areas and pinpoint survivors. The researchers are already working with collaborator Dr. Mihail Sichitiu to develop the next generation of biobot networking and localization technology.

Bozkurt’s team also recently demonstrated technology that creates an invisible fence for keeping biobots in a defined area. This is significant because it can be used to keep biobots at a disaster site, and to keep the biobots within range of each other so that they can be used as a reliable mobile wireless network. This technology could also be used to steer biobots to light sources, so that the miniaturized solar panels on biobot backpacks can be recharged. Video of the invisible fence technology in practice can be seen online.
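The article doesn’t describe how the invisible fence is implemented; one plausible reading is a simple geometric check that triggers a corrective stimulus whenever a biobot crosses a virtual boundary. A hypothetical sketch (the names and the circular boundary are assumptions, not the paper’s design):

```python
import math

def fence_stimulus(pos, center=(0.0, 0.0), radius=1.0):
    """Invisible-fence sketch for a circular virtual boundary.

    pos: biobot position (x, y) in metres.
    Returns None while the biobot is inside the fence; otherwise the
    heading (radians) back toward the centre, i.e. the direction a
    corrective stimulus would steer it.
    """
    dx, dy = center[0] - pos[0], center[1] - pos[1]
    if math.hypot(dx, dy) <= radius:
        return None  # inside the boundary: no stimulus needed
    return math.atan2(dy, dx)
```

The same check, with the fence centre placed at a light source, would also cover the recharging use the article mentions: biobots drifting away from the light get nudged back toward it.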

A paper on the microphone sensor research, “Acoustic Sensors for Biobotic Search and Rescue,” was presented Nov. 5 at the IEEE Sensors 2014 conference in Valencia, Spain. Lead author of the paper is Eric Whitmire, a former undergraduate at NC State. The paper was co-authored by Tahmid Latif, a Ph.D. student at NC State, and Bozkurt.

The paper on the invisible fence for biobots, “Towards Fenceless Boundaries for Solar Powered Insect Biobots,” was presented Aug. 28 at the 36th Annual International IEEE EMBS Conference in Chicago, Illinois. Latif was the lead author. Co-authors include Tristan Novak, a graduate student at NC State, Whitmire and Bozkurt.

The research was supported by the National Science Foundation under grant number 1239243.

Projecting a robot’s intentions: New spin on virtual reality helps engineers read robots’ minds (Science Daily)

Date: October 29, 2014

Source: Massachusetts Institute of Technology

Summary: In a darkened, hangar-like space inside MIT’s Building 41, a small, Roomba-like robot is trying to make up its mind. Standing in its path is an obstacle — a human pedestrian who’s pacing back and forth. To get to the other side of the room, the robot has to first determine where the pedestrian is, then choose the optimal route to avoid a close encounter. As the robot considers its options, its “thoughts” are projected on the ground: A large pink dot appears to follow the pedestrian — a symbol of the robot’s perception of the pedestrian’s position in space.

A new spin on virtual reality helps engineers read robots’ minds. Credit: Video screenshot courtesy of Melanie Gonick/MIT

In a darkened, hangar-like space inside MIT’s Building 41, a small, Roomba-like robot is trying to make up its mind.

Standing in its path is an obstacle — a human pedestrian who’s pacing back and forth. To get to the other side of the room, the robot has to first determine where the pedestrian is, then choose the optimal route to avoid a close encounter.

As the robot considers its options, its “thoughts” are projected on the ground: A large pink dot appears to follow the pedestrian — a symbol of the robot’s perception of the pedestrian’s position in space. Lines, each representing a possible route for the robot to take, radiate across the room in meandering patterns and colors, with a green line signifying the optimal route. The lines and dots shift and adjust as the pedestrian and the robot move.
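Conceptually, the green line is just the lowest-cost member of the candidate set. A toy sketch of that selection, assuming a cost of path length plus a fixed penalty for waypoints that pass too close to the pedestrian (the actual MIT planner is certainly more sophisticated, and these names are illustrative):

```python
import math

def route_cost(route, pedestrian, safety_radius=0.5, penalty=10.0):
    """Cost of a candidate route: total path length, plus a penalty
    for each waypoint within safety_radius of the pedestrian.

    route: list of (x, y) waypoints; pedestrian: (x, y) position.
    """
    length = sum(math.dist(a, b) for a, b in zip(route, route[1:]))
    close_calls = sum(penalty for p in route
                      if math.dist(p, pedestrian) < safety_radius)
    return length + close_calls

def pick_optimal(routes, pedestrian):
    # The lowest-cost route is the one the system would draw in green.
    return min(routes, key=lambda r: route_cost(r, pedestrian))
```

A longer detour around the pedestrian beats a shorter route that cuts through the safety radius, which is exactly the trade-off the projected lines make visible.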

This new visualization system combines ceiling-mounted projectors with motion-capture technology and animation software to project a robot’s intentions in real time. The researchers have dubbed the system “measurable virtual reality” (MVR) — a spin on conventional virtual reality that’s designed to visualize a robot’s “perceptions and understanding of the world,” says Ali-akbar Agha-mohammadi, a postdoc in MIT’s Aerospace Controls Lab.

“Normally, a robot may make some decision, but you can’t quite tell what’s going on in its mind — why it’s choosing a particular path,” Agha-mohammadi says. “But if you can see the robot’s plan projected on the ground, you can connect what it perceives with what it does to make sense of its actions.”

Agha-mohammadi says the system may help speed up the development of self-driving cars, package-delivering drones, and other autonomous, route-planning vehicles.

“As designers, when we can compare the robot’s perceptions with how it acts, we can find bugs in our code much faster,” Agha-mohammadi says. “For example, if we fly a quadrotor, and see something go wrong in its mind, we can terminate the code before it hits the wall, or breaks.”

The system was developed by Shayegan Omidshafiei, a graduate student, and Agha-mohammadi. They and their colleagues, including Jonathan How, a professor of aeronautics and astronautics, will present details of the visualization system at the American Institute of Aeronautics and Astronautics’ SciTech conference in January.

Seeing into the mind of a robot

The researchers initially conceived of the visualization system in response to feedback from visitors to their lab. During demonstrations of robotic missions, it was often difficult for people to understand why robots chose certain actions.

“Some of the decisions almost seemed random,” Omidshafiei recalls.

The team developed the system as a way to visually represent the robots’ decision-making process. The engineers mounted 18 motion-capture cameras on the ceiling to track multiple robotic vehicles simultaneously. They then developed computer software that visually renders “hidden” information, such as a robot’s possible routes, and its perception of an obstacle’s position. They projected this information on the ground in real time, as physical robots operated.

The researchers soon found that by projecting the robots’ intentions, they were able to spot problems in the underlying algorithms, and make improvements much faster than before.

“There are a lot of problems that pop up because of uncertainty in the real world, or hardware issues, and that’s where our system can significantly reduce the amount of effort spent by researchers to pinpoint the causes,” Omidshafiei says. “Traditionally, physical and simulation systems were disjointed. You would have to go to the lowest level of your code, break it down, and try to figure out where the issues were coming from. Now we have the capability to show low-level information in a physical manner, so you don’t have to go deep into your code, or restructure your vision of how your algorithm works. You could see applications where you might cut down a whole month of work into a few days.”

Bringing the outdoors in

The group has explored a few such applications using the visualization system. In one scenario, the team is looking into the role of drones in fighting forest fires. Such drones may one day be used both to survey and to squelch fires — first observing a fire’s effect on various types of vegetation, then identifying and putting out those fires that are most likely to spread.

To make fire-fighting drones a reality, the team is first testing the possibility virtually. In addition to projecting a drone’s intentions, the researchers can also project landscapes to simulate an outdoor environment. In test scenarios, the group has flown physical quadrotors over projections of forests, shown from an aerial perspective to simulate a drone’s view, as if it were flying over treetops. The researchers projected fire on various parts of the landscape, and directed quadrotors to take images of the terrain — images that could eventually be used to “teach” the robots to recognize signs of a particularly dangerous fire.

Going forward, Agha-mohammadi says, the team plans to use the system to test drone performance in package-delivery scenarios. Toward this end, the researchers will simulate urban environments by creating street-view projections of cities, similar to zoomed-in perspectives on Google Maps.

“Imagine we can project a bunch of apartments in Cambridge,” Agha-mohammadi says. “Depending on where the vehicle is, you can look at the environment from different angles, and what it sees will be quite similar to what it would see if it were flying in reality.”

Because the Federal Aviation Administration has placed restrictions on outdoor testing of quadrotors and other autonomous flying vehicles, Omidshafiei points out that testing such robots in a virtual environment may be the next best thing. In fact, when it comes to the types of virtual environments the new system can project, the sky is the limit.

“With this system, you can design any environment you want, and can test and prototype your vehicles as if they’re fully outdoors, before you deploy them in the real world,” Omidshafiei says.

This work was supported by Boeing.


Citizen science network produces accurate maps of atmospheric dust (Science Daily)

Date: October 27, 2014

Source: Leiden University

Summary: Measurements by thousands of citizen scientists in the Netherlands using their smartphones and the iSPEX add-on are delivering accurate data on dust particles in the atmosphere that add valuable information to professional measurements. The research team analyzed all measurements from three days in 2013 and combined them into unique maps of dust particles above the Netherlands. The results match and sometimes even exceed those of ground-based measurement networks and satellite instruments.

iSPEX map compiled from all iSPEX measurements performed in the Netherlands on July 8, 2013, between 14:00 and 21:00. Each blue dot represents one of the 6007 measurements that were submitted on that day. At each location on the map, the 50 nearest iSPEX measurements were averaged and converted to Aerosol Optical Thickness, a measure for the total amount of atmospheric particles. This map can be compared to the AOT data from the MODIS Aqua satellite, which flew over the Netherlands at 16:12 local time. The relatively high AOT values were caused by smoke clouds from forest fires in North America, which were blown over the Netherlands at an altitude of 2-4 km. In the course of the day, winds from the North brought clearer air to the northern provinces. Credit: Image courtesy of Leiden University

Measurements by thousands of citizen scientists in the Netherlands using their smartphones and the iSPEX add-on are delivering accurate data on dust particles in the atmosphere that add valuable information to professional measurements. The iSPEX team, led by Frans Snik of Leiden University, analyzed all measurements from three days in 2013 and combined them into unique maps of dust particles above the Netherlands. The results match and sometimes even exceed those of ground-based measurement networks and satellite instruments.

The iSPEX maps achieve a spatial resolution as small as 2 kilometers, whereas satellite data are much coarser. They also fill in blind spots of established ground-based atmospheric measurement networks. The scientific article that presents these first results of the iSPEX project is being published in Geophysical Research Letters.

The iSPEX team developed a new atmospheric measurement method in the form of a low-cost add-on for smartphone cameras. The iSPEX app instructs participants to scan the blue sky while the phone’s built-in camera takes pictures through the add-on. The photos record both the spectrum and the linear polarization of the sunlight that is scattered by suspended dust particles, and thus contain information about the properties of these particles. While such properties are difficult to measure, much better knowledge on atmospheric particles is needed to understand their effects on health, climate and air traffic.

Thousands of participants performed iSPEX measurements throughout the Netherlands on three cloud-free days in 2013. This large-scale citizen science experiment allowed the iSPEX team to verify the reliability of this new measurement method.

After a rigorous quality assessment of each submitted data point, measurements recorded in specific areas within a limited amount of time are averaged to obtain sufficient accuracy. Subsequently the data are converted to Aerosol Optical Thickness (AOT), which is a standardized quantity related to the total amount of atmospheric particles. The iSPEX AOT data match comparable data from satellites and the AERONET ground station at Cabauw, the Netherlands. In areas with sufficiently high measurement densities, the iSPEX maps can even discern smaller details than satellite data.
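The averaging step described above — take the measurements nearest each map location and average them before converting to AOT — can be sketched as follows. This is a simplification: the 50-neighbour figure comes from the map caption, the distance here is a flat-plane approximation rather than a geographic one, and the real pipeline works on calibrated radiometric quantities.

```python
import math

def local_average(target, samples, k=50):
    """Average of the k measurements nearest to a map location.

    target: (x, y) location of the map pixel.
    samples: list of ((x, y), value) measurement pairs.
    The averaged value stands in for the per-location quantity
    that is subsequently converted to Aerosol Optical Thickness.
    """
    nearest = sorted(samples, key=lambda s: math.dist(target, s[0]))[:k]
    return sum(value for _, value in nearest) / len(nearest)
```

Averaging over neighbours is what buys the accuracy: individual smartphone measurements are noisy, but in densely sampled areas the noise averages out, which is why the maps can locally out-resolve satellite data.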

Team leader Snik: “This proves that our new measurement method works. But the great strength of iSPEX is the measurement philosophy: the establishment of a citizen science network of thousands of enthusiastic volunteers who actively carry out outdoor measurements. In this way, we can collect valuable information about atmospheric particles on locations and/or at times that are not covered by professional equipment. These results are even more accurate than we had hoped, and give rise to further research and development. We are currently investigating to what extent we can extract more information about atmospheric particles from the iSPEX data, like their sizes and compositions. And of course, we want to organize many more measurement days.”

With the help of a grant that supports public activities in Europe during the International Year of Light 2015, the iSPEX team is now preparing for the international expansion of the project. This expansion provides opportunities for national and international parties to join the project. Snik: “Our final goal is to establish a global network of citizen scientists who all contribute measurements to study the sources and societal effects of polluting atmospheric particles.”

Journal Reference:

  1. Frans Snik, Jeroen H. H. Rietjens, Arnoud Apituley, Hester Volten, Bas Mijling, Antonio Di Noia, Stephanie Heikamp, Ritse C. Heinsbroek, Otto P. Hasekamp, J. Martijn Smit, Jan Vonk, Daphne M. Stam, Gerard van Harten, Jozua de Boer, Christoph U. Keller. Mapping atmospheric aerosols with a citizen science network of smartphone spectropolarimeters. Geophysical Research Letters, 2014; DOI: 10.1002/2014GL061462

Scientists find ‘hidden brain signatures’ of consciousness in vegetative state patients (Science Daily)

Date: October 16, 2014

Source: University of Cambridge

Summary: Scientists in Cambridge have found hidden signatures in the brains of people in a vegetative state, which point to networks that could support consciousness even when a patient appears to be unconscious and unresponsive. The study could help doctors identify patients who are aware despite being unable to communicate.

These images show brain networks in two behaviorally similar vegetative patients (left and middle), but one of whom imagined playing tennis (middle panel), alongside a healthy adult (right panel). Credit: Srivas Chennu

Scientists in Cambridge have found hidden signatures in the brains of people in a vegetative state, which point to networks that could support consciousness even when a patient appears to be unconscious and unresponsive. The study could help doctors identify patients who are aware despite being unable to communicate.

There has been a great deal of interest recently in how much patients in a vegetative state following severe brain injury are aware of their surroundings. Although unable to move and respond, some of these patients are able to carry out tasks such as imagining playing a game of tennis. Using a functional magnetic resonance imaging (fMRI) scanner, which measures brain activity, researchers have previously been able to record activity in the pre-motor cortex, the part of the brain which deals with movement, in apparently unconscious patients asked to imagine playing tennis.

Now, a team of researchers led by scientists at the University of Cambridge and the MRC Cognition and Brain Sciences Unit, Cambridge, have used high-density electroencephalographs (EEG) and a branch of mathematics known as ‘graph theory’ to study networks of activity in the brains of 32 patients diagnosed as vegetative and minimally conscious and compare them to healthy adults. The findings of the research are published today in the journal PLOS Computational Biology. The study was funded mainly by the Wellcome Trust, the National Institute of Health Research Cambridge Biomedical Research Centre and the Medical Research Council (MRC).

The researchers showed that the rich and diversely connected networks that support awareness in the healthy brain are typically — but importantly, not always — impaired in patients in a vegetative state. Some vegetative patients had well-preserved brain networks that look similar to those of healthy adults — these patients were those who had shown signs of hidden awareness by following commands such as imagining playing tennis.
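The study’s graph-theoretic analysis uses far richer spectral measures, but the flavour of “richly connected” versus “impaired” networks can be conveyed with even the simplest graph summary, such as mean node degree. A toy illustration, not the study’s actual metric:

```python
def mean_degree(adjacency):
    """Mean node degree of a binary connectivity graph.

    adjacency: square list-of-lists of 0/1, where entry [i][j] = 1
    means brain regions i and j are functionally connected.
    Higher values indicate a more densely connected network.
    """
    n = len(adjacency)
    return sum(sum(row) for row in adjacency) / n
```

On metrics of this family, the well-preserved networks of covertly aware patients would score closer to healthy adults than the sparse networks of other vegetative patients.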

Dr Srivas Chennu from the Department of Clinical Neurosciences at the University of Cambridge says: “Understanding how consciousness arises from the interactions between networks of brain regions is an elusive but fascinating scientific question. But for patients diagnosed as vegetative and minimally conscious, and their families, this is far more than just an academic question — it takes on a very real significance. Our research could improve clinical assessment and help identify patients who might be covertly aware despite being uncommunicative.”

The findings could help researchers develop a relatively simple way of identifying which patients might be aware whilst in a vegetative state. Unlike the ‘tennis test’, which can be a difficult task for patients and requires expensive and often unavailable fMRI scanners, this new technique uses EEG and could therefore be administered at a patient’s bedside. However, the tennis test is stronger evidence that the patient is indeed conscious, to the extent that they can follow commands using their thoughts. The researchers believe that a combination of such tests could help improve accuracy in the prognosis for a patient.

Dr Tristan Bekinschtein from the MRC Cognition and Brain Sciences Unit and the Department of Psychology, University of Cambridge, adds: “Although there are limitations to how predictive our test would be if used in isolation, combined with other tests it could help in the clinical assessment of patients. If a patient’s ‘awareness’ networks are intact, then we know that they are likely to be aware of what is going on around them. But unfortunately, they also suggest that vegetative patients with severely impaired networks at rest are unlikely to show any signs of consciousness.”

Journal Reference:

  1. Chennu S, Finoia P, Kamau E, Allanson J, Williams GB, et al. Spectral Signatures of Reorganised Brain Networks in Disorders of Consciousness. PLOS Computational Biology, 2014; 10 (10): e1003887 DOI: 10.1371/journal.pcbi.1003887

City and rural super-dialects exposed via Twitter (New Scientist)

11 August 2014 by Aviva Rutkin

Magazine issue 2981.

WHAT do two Twitter users who live halfway around the world from each other have in common? They might speak the same “super-dialect”. An analysis of millions of Spanish tweets found two popular speaking styles: one favoured by people living in cities, another by those in small rural towns.

Bruno Gonçalves at Aix-Marseille University in France and David Sánchez at the Institute for Cross-Disciplinary Physics and Complex Systems in Palma, Majorca, Spain, analysed more than 50 million tweets sent over a two-year period. Each tweet was tagged with a GPS marker showing whether the message came from a user somewhere in Spain, Latin America, or Spanish-speaking pockets of Europe and the US.

The team then searched the tweets for variations on common words. Someone tweeting about their socks might use the word calcetas, medias, or soquetes, for example. Another person referring to their car might call it their coche, auto, movi, or one of three other variations with roughly the same meaning. By comparing these word choices with where they came from, the researchers were able to map preferences across continents.
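At its core, this mapping amounts to counting which lexical variant each region’s tweets favour. A minimal sketch of that tally (the function and field names are illustrative, not from the study):

```python
from collections import Counter

def variant_shares(tweets, variants):
    """Per-region share of each lexical variant.

    tweets: list of (region, text) pairs.
    variants: set of competing words for one concept,
              e.g. {"calcetas", "medias", "soquetes"}.
    Returns {region: {variant: fraction of that region's
    variant-bearing tweets using it}}.
    """
    counts = {}
    for region, text in tweets:
        words = set(text.lower().split())
        for v in variants & words:
            counts.setdefault(region, Counter())[v] += 1
    return {region: {v: n / sum(c.values()) for v, n in c.items()}
            for region, c in counts.items()}
```

Scaled up to 50 million geotagged tweets and many word groups, region-by-region shares like these are what let the researchers cluster locations into the urban and rural “super-dialects.”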

According to their data, Twitter users in major cities thousands of miles apart, like Quito in Ecuador and San Diego in California, tend to have more language in common with each other than with a person tweeting from the nearby countryside, probably due to the influence of mass media.

Studies like these may allow us to dig deeper into how language varies across place, time and culture, says Eric Holt at the University of South Carolina in Columbia.

This article appeared in print under the headline “Super-dialects exposed via millions of tweets”