Tag archive: Mediação tecnológica

Geoengineering Gone Wild: Newsweek Touts Turning Humans Into Hobbits To Save Climate (Climate Progress)



A Newsweek cover story touts genetically engineering humans to be smaller, with better night vision (like, say, hobbits) to save the Earth. Matamata, New Zealand, or “Hobbiton,” a site created for filming the Hollywood blockbusters The Hobbit and The Lord of the Rings. CREDIT: SHUTTERSTOCK

Newsweek has an entire cover story devoted to raising the question, “Can Geoengineering Save the Earth?” After reading it, though, you may not realize the answer is a resounding “no.” In part that’s because Newsweek manages to avoid quoting even one of the countless general critics of geoengineering in its 2700-word (!) piece.

Geoengineering is not a well-defined term, but at its broadest, it is the large-scale manipulation of the Earth and its biosphere to counteract the effects of human-caused global warming. Global warming itself is geoengineering — originally unintentional, but now, after decades of scientific warnings, not so much.

I have likened geoengineering to a dangerous, never-tested course of chemotherapy prescribed to treat a condition curable through diet and exercise — or, in this case, through greenhouse gas emissions reduction. If your actual doctor were to prescribe such a treatment, you would get another doctor.

The media likes geoengineering stories because they are clickbait involving all sorts of eye-popping science fiction (non)solutions to climate change that don’t actually require anything of their readers (or humanity) except infinite credulousness. And so Newsweek informs us that adorable ants might solve the problem or maybe phytoplankton can if given Popeye-like superstrength with a diet of iron or, as we’ll see, maybe we humans can, if we allow ourselves to be turned into hobbit-like creatures. The only thing they left out was time-travel.

The author does talk to an unusually sober expert supporter of geoengineering, climatologist Ken Caldeira. Caldeira knows that of all the proposed geoengineering strategies, only one makes even the tiniest bit of sense — and he knows even that one doesn’t make much sense. That would be the idea of spewing vast amounts of tiny particulates (sulfate aerosols) into the atmosphere to block sunlight, mimicking the global temperature drops that follow volcanic eruptions. But the article notes the caveat: “that said, Caldeira doesn’t believe any method of geoengineering is really a good solution to fighting climate change — we can’t test them on a large scale, and implementing them blindly could be dangerous.”

Actually, it’s worse than that. As Caldeira told me in 2009, “If we keep emitting greenhouse gases with the intent of offsetting the global warming with ever increasing loadings of particles in the stratosphere, we will be heading to a planet with extremely high greenhouse gases and a thick stratospheric haze that we would need to maintain more-or-less indefinitely. This seems to be a dystopic world out of a science fiction story.”

And the scientific literature has repeatedly explained that the aerosol-cooling strategy — or indeed any large-scale effort to manipulate sunlight — is very dangerous. Just last month, the UK Guardian reported that the aerosol strategy “risks ‘terrifying’ consequences including droughts and conflicts,” according to recent studies.

“Billions of people would suffer worse floods and droughts if technology was used to block warming sunlight, the research found.”

And remember, this dystopic world where billions suffer is the best geoengineering strategy out there. And it still does nothing to stop the catastrophic acidification of the ocean.

There simply is no rational or moral substitute for aggressive greenhouse gas cuts. But Newsweek quickly dispenses with that supposedly “seismic shift in what has become a global value system” so it can move on to its absurdist “reimagining of what it means to be human”:

In a paper released in 2012, S. Matthew Liao, a philosopher and ethicist at New York University, and some colleagues proposed a series of human-engineering projects that could make our very existence less damaging to the Earth. Among the proposals were a patch you can put on your skin that would make you averse to the flavor of meat (cattle farms are a notorious producer of the greenhouse gas methane), genetic engineering in utero to make humans grow shorter (smaller people means fewer resources used), technological reengineering of our eyeballs to make us better at seeing at night (better night vision means lower energy consumption)….

Yes, let’s turn humans into hobbits (who are “about 3 feet tall” and “their night vision is excellent”). Anyone can see that could easily be done for billions of people in the timeframe needed to matter. Who could imagine any political or practical objection?

Now you may be thinking that Newsweek can’t possibly be serious in devoting ink to such nonsense. But then how did the last two paragraphs of the article make it to print:

Geoengineering, Liao argues, doesn’t address the root cause. Remaking the planet simply attempts to counteract the damage that’s been done, but it does nothing to stop the burden humans put on the planet. “Human engineering is more of an upstream solution,” says Liao. “You get right to the source. If we’re smaller on average, then we can have a smaller footprint on the planet. You’re looking at the source of the problem.”

It might be uncomfortable for humans to imagine intentionally getting smaller over generations or changing their physiology to become averse to meat, but why should seeding the sky with aerosols be any more acceptable? In the end, these are all actions we would enact only in worst-case scenarios. And when we’re facing the possible devastation of all mankind, perhaps a little humanity-wide night vision won’t seem so dramatic.

Memo to Newsweek: We are already facing the devastation of all mankind. And science has already provided the means of our “rescue,” the means of reducing “the burden humans put on the planet” — the myriad carbon-free energy technologies that reduce greenhouse gas emissions. Perhaps LED lighting would make a slightly more practical strategy than reengineering our eyeballs, though perhaps not one dramatic enough to inspire one of your cover stories.

As Caldeira himself has said elsewhere of geoengineering, “I think that 99% of our effort to avoid climate change should be put on emissions reduction, and 1% of our effort should be looking into these options.” So perhaps Newsweek will consider 99 articles on the real solutions before returning to the magical thinking of Middle Earth.

Underwater city designed in Japan could house 5,000 residents (Portal do Meio Ambiente)


Architectural design of an underwater city: an alternative for 2030 (Photo: AFP)

A Japanese construction company says that, in the future, human beings could live in large underwater housing complexes.

Under the plan, about 5,000 people could live and work in modern versions of the lost city of Atlantis.

The structures would include hotels, residential spaces, and commercial complexes, the website Business Insider reported.

A large globe, floating on the sea surface but able to be submerged in bad weather, would form the center of a gigantic spiral structure descending to depths of up to 4,000 meters.

The spiral would form a 15-kilometer path from the building down to the ocean floor, which could serve as a factory for harvesting resources such as metals and rare earths.

Visionaries at the construction firm Shimizu say it would be possible to use microorganisms to convert carbon dioxide captured at the surface into methane.


Energy. The concept was developed jointly with several organizations, including the University of Tokyo and Japan’s science and technology agency.

The large temperature difference between water at the surface and at the bottom of the sea could be used to generate energy.
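
This ocean-thermal idea can be bounded with a back-of-the-envelope calculation: the Carnot limit caps the efficiency of any engine running between warm surface water and cold deep water. A minimal sketch, assuming illustrative temperatures of about 25 °C at the surface and 5 °C at depth (typical tropical-ocean values, not figures from the Shimizu proposal):

```python
# Carnot-limit estimate for ocean thermal energy conversion (OTEC).
# The temperatures below are illustrative assumptions, not project data.

def carnot_efficiency(t_hot_c: float, t_cold_c: float) -> float:
    """Maximum fraction of heat convertible to work between two reservoirs."""
    t_hot_k = t_hot_c + 273.15   # convert Celsius to Kelvin
    t_cold_k = t_cold_c + 273.15
    return 1.0 - t_cold_k / t_hot_k

surface, deep = 25.0, 5.0  # deg C, assumed values
eta = carnot_efficiency(surface, deep)
print(f"Carnot limit: {eta:.1%}")  # roughly 6-7%
```

Real ocean-thermal plants achieve only a few percent after pumping losses, which is why the enormous volume of seawater that must be moved, rather than the temperature difference itself, dominates the engineering.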

Shimizu says the underwater city would cost about three trillion yen (about US$25 billion), and all the technology could be available by 2030.

The company has previously designed a floating metropolis and a ring of solar panels around the moon.

Source: Estadão.

On a website, indigenous people teach their history and overturn prejudices (Estadão)

Índio Educa publishes multimedia teaching materials on the histories, traditions, and struggles of Brazil’s peoples

Whenever the Xucuru indigenous educator Casé Angatu leaves Ilhéus, in Bahia, to offer a course on indigenous cultures in São Paulo, he hears from some participant: “Do you eat people?” So accustomed to being remembered through stereotypes, Casé laughs, shrugs it off, and takes the opportunity to present to the group, in front of the Pátio do Colégio, the Índio Educa project. On the site, indigenous people from all over Brazil produce multimedia teaching materials about their histories, traditions, and struggles.

Read the full text at: ,em-site-indigenas-ensinam-sua-historia,1601271

(Estado S.Paulo)

Neo-Zapatista demonstrations (Fapesp)


Beyond the demands against public spending on the organization of the World Cup and for improvements in transportation, health, and education, the June 2013 demonstrations in Brazil highlighted a symbolic expression of the articulations of so-called “net-activism,” the key term of a study funded by FAPESP. In a video produced by the Pesquisa FAPESP team, sociologist Massimo Di Felice, of the Atopos Research Center at the School of Communications and Arts of the University of São Paulo (ECA-USP) and coordinator of the study, discusses the quality and place of net-activist actions and how digital networks and new mobile connectivity devices are changing practices of social participation in Brazil and around the world.

High-tech mirror beams heat away from buildings into space (Science Daily)


November 26, 2014


Stanford School of Engineering


Engineers have invented a material designed to help cool buildings. The material reflects incoming sunlight, and it sends heat from inside the structure directly into space as infrared radiation.


Stanford engineers have invented a material designed to help cool buildings. The material reflects incoming sunlight and sends heat from inside the structure directly into space as infrared radiation – represented by reddish rays. Credit: Illustration: Nicolle R. Fuller, Sayo-Art LLC

Stanford engineers have invented a revolutionary coating material that can help cool buildings, even on sunny days, by radiating heat away from the buildings and sending it directly into space.

A new ultrathin multilayered material can cool buildings without air conditioning by radiating warmth from inside the buildings into space while also reflecting sunlight to reduce incoming heat.

A team led by electrical engineering Professor Shanhui Fan and research associate Aaswath Raman reported this energy-saving breakthrough in the journal Nature.

The heart of the invention is an ultrathin, multilayered material that deals with light, both invisible and visible, in a new way.

Invisible light in the form of infrared radiation is one of the ways that all objects and living things throw off heat. When we stand in front of a closed oven without touching it, the heat we feel is infrared light. This invisible, heat-bearing light is what the Stanford invention shunts away from buildings and sends into space.

Of course, sunshine also warms buildings. The new material, in addition to dealing with infrared light, is also a stunningly efficient mirror that reflects virtually all of the incoming sunlight that strikes it.

The result is what the Stanford team calls photonic radiative cooling — a one-two punch that offloads infrared heat from within a building while also reflecting the sunlight that would otherwise warm it up. The result is cooler buildings that require less air conditioning.

“This is very novel and an extraordinarily simple idea,” said Eli Yablonovitch, a professor of engineering at the University of California, Berkeley, and a pioneer of photonics who directs the Center for Energy Efficient Electronics Science. “As a result of professor Fan’s work, we can now [use radiative cooling], not only at night but counter-intuitively in the daytime as well.”

The researchers say they designed the material to be cost-effective for large-scale deployment on building rooftops. Though it’s still a young technology, they believe it could one day reduce demand for electricity. As much as 15 percent of the energy used in buildings in the United States is spent powering air conditioning systems.

In practice the researchers think the coating might be sprayed on a more solid material to make it suitable for withstanding the elements.

“This team has shown how to passively cool structures by simply radiating heat into the cold darkness of space,” said Nobel Prize-winning physicist Burton Richter, professor emeritus at Stanford and former director of the research facility now called the SLAC National Accelerator Laboratory.

A warming world needs cooling technologies that don’t require power, according to Raman, lead author of the Nature paper. “Across the developing world, photonic radiative cooling makes off-grid cooling a possibility in rural regions, in addition to meeting skyrocketing demand for air conditioning in urban areas,” he said.

Using a window into space

The real breakthrough is how the Stanford material radiates heat away from buildings.

As science students know, heat can be transferred in three ways: conduction, convection and radiation. Conduction transfers heat by touch. That’s why you don’t touch a hot oven pan without wearing a mitt. Convection transfers heat by movement of fluids or air. It’s the warm rush of air when the oven is opened. Radiation transfers heat in the form of infrared light that emanates outward from objects, sight unseen.

The first part of the coating’s one-two punch radiates heat-bearing infrared light directly into space. The ultrathin coating was carefully constructed to send this infrared light away from buildings at the precise frequency that allows it to pass through the atmosphere without warming the air, a key feature given the dangers of global warming.

“Think about it like having a window into space,” Fan said.

Aiming the mirror

But transmitting heat into space is not enough on its own.

This multilayered coating also acts as a highly efficient mirror, preventing 97 percent of sunlight from striking the building and heating it up.

“We’ve created something that’s a radiator that also happens to be an excellent mirror,” Raman said.

Together, the radiation and reflection make the photonic radiative cooler nearly 9 degrees Fahrenheit cooler than the surrounding air during the day.

The multilayered material is just 1.8 microns thick, thinner than the thinnest aluminum foil.

It is made of seven layers of silicon dioxide and hafnium oxide on top of a thin layer of silver. These layers are not a uniform thickness, but are instead engineered to create a new material. Its internal structure is tuned to radiate infrared rays at a frequency that lets them pass into space without warming the air near the building.
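
The energy balance the coating has to win can be sketched with the Stefan-Boltzmann law: net cooling power is what the surface radiates out through the 8-13 micron atmospheric window minus the sunlight it fails to reflect. A toy model follows; the 97 percent reflectance is from the article, while the emissivity and the "sky fraction" that lumps together atmospheric back-radiation are simplified assumptions (the actual Nature paper computes spectral emissivity across the full atmospheric transmission spectrum):

```python
# Simplified daytime radiative-cooling budget, per square meter.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def net_cooling_power(t_surface_k: float, solar_w: float,
                      reflectance: float, emissivity: float,
                      sky_fraction: float) -> float:
    """Thermal emission escaping to space minus absorbed sunlight.

    sky_fraction is an assumed lumped factor: the share of gross
    thermal emission that is not returned by atmospheric radiation.
    """
    emitted = emissivity * SIGMA * t_surface_k ** 4 * sky_fraction
    absorbed_solar = (1.0 - reflectance) * solar_w
    return emitted - absorbed_solar

# Illustrative values: 300 K surface, 1000 W/m^2 sunlight, 97% reflectance
# (per the article), assumed emissivity 0.9 and 25% net escape fraction.
p = net_cooling_power(300.0, 1000.0, 0.97, 0.9, 0.25)
print(f"net cooling ~ {p:.0f} W/m^2")
```

Even this crude budget shows why the mirror matters: drop the reflectance to an ordinary roof's value and the absorbed sunlight overwhelms the few tens of watts per square meter available from radiating through the window.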

“This photonic approach gives us the ability to finely tune both solar reflection and infrared thermal radiation,” said Linxiao Zhu, doctoral candidate in applied physics and a co-author of the paper.

“I am personally very excited about their results,” said Marin Soljacic, a physics professor at the Massachusetts Institute of Technology. “This is a great example of the power of nanophotonics.”

From prototype to building panel

Making photonic radiative cooling practical requires solving at least two technical problems.

The first is how to conduct the heat inside the building to this exterior coating. Once it gets there, the coating can direct the heat into space, but engineers must first figure out how to efficiently deliver the building heat to the coating.

The second problem is production. Right now the Stanford team’s prototype is the size of a personal pizza. Cooling buildings will require large panels. The researchers say large-area fabrication facilities can make their panels at the scales needed.

The cosmic fridge

More broadly, the team sees this project as a first step toward using the cold of space as a resource. In the same way that sunlight provides a renewable source of solar energy, the cold universe supplies a nearly unlimited expanse to dump heat.

“Every object that produces heat has to dump that heat into a heat sink,” Fan said. “What we’ve done is to create a way that should allow us to use the coldness of the universe as a heat sink during the day.”

In addition to Fan, Raman and Zhu, this paper has two additional co-authors: Marc Abou Anoma, a master’s student in mechanical engineering who has graduated; and Eden Rephaeli, a doctoral student in applied physics who has graduated.

This research was supported by the Advanced Research Projects Agency-Energy (ARPA-E) of the U.S. Department of Energy.

Story Source:

The above story is based on materials provided by Stanford School of Engineering. The original article was written by Chris Cesare. Note: Materials may be edited for content and length.

Journal Reference:

  1. Aaswath P. Raman, Marc Abou Anoma, Linxiao Zhu, Eden Rephaeli, Shanhui Fan. Passive radiative cooling below ambient air temperature under direct sunlight. Nature, 2014; 515 (7528): 540 DOI: 10.1038/nature13883

Climate manipulation may cause unwanted effects (N.Y.Times/FSP)

Ilvy Njiokiktjien/The New York Times
Olivine, a green-tinted mineral said to remove carbon dioxide from the atmosphere, in the hands of retired geochemist Olaf Schuiling in Maasland, Netherlands, Oct. 9, 2014. Once considered the stuff of wild-eyed fantasies, such ideas for countering climate change — known as geoengineering solutions — are now being discussed seriously by scientists.


18/11/2014 02h01

For Olaf Schuiling, the solution to global warming lies beneath our feet.

Schuiling, a retired geochemist, believes that climate salvation lies in olivine, a green-tinted mineral abundant throughout the world. When exposed to the elements, it slowly pulls carbon dioxide out of the atmosphere.

Olivine has been doing this naturally for billions of years, but Schuiling wants to speed up the process by spreading it on fields and beaches and using it in dikes, trails, and even playgrounds. Sprinkle the right amount of crushed rock, he says, and it will eventually remove enough carbon dioxide to slow the rise in global temperatures.
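
The capture chemistry behind the proposal can be sized with simple stoichiometry. In the idealized weathering reaction, one mole of forsterite olivine (Mg2SiO4) neutralizes up to four moles of CO2. A minimal sketch; the four-to-one molar ratio is the textbook upper bound, not a field-measured figure:

```python
# Idealized olivine weathering:
#   Mg2SiO4 + 4 CO2 + 4 H2O -> 2 Mg(2+) + 4 HCO3(-) + H4SiO4
# Molar masses in g/mol, from standard atomic weights.
M_OLIVINE = 2 * 24.305 + 28.086 + 4 * 15.999  # forsterite, ~140.7 g/mol
M_CO2 = 12.011 + 2 * 15.999                   # ~44.0 g/mol

# Upper bound on tonnes of CO2 captured per tonne of olivine weathered.
co2_per_tonne = 4 * M_CO2 / M_OLIVINE
print(f"{co2_per_tonne:.2f} t CO2 per t olivine (idealized)")  # ~1.25
```

At that idealized rate, offsetting a single gigatonne of CO2 per year would still require weathering roughly 0.8 gigatonnes of rock annually, which is why the mining, grinding, and transport emissions mentioned below weigh so heavily against the scheme.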

“Let the Earth help us save it,” said Schuiling, 82, in his office at Utrecht University.

Ideas for countering climate change, such as these geoengineering proposals, were once considered pure fantasy.

However, the effects of climate change may become so severe that such solutions could come to be considered seriously.

Schuiling’s idea is one of several that aim to reduce levels of carbon dioxide, the main greenhouse gas, so that the atmosphere retains less heat.

Other approaches, potentially faster and more feasible but riskier, would create the equivalent of a sunshade around the planet, either by dispersing reflective droplets into the stratosphere or by spraying seawater to form more clouds over the oceans. With less sunlight reaching the Earth’s surface, less heat would be retained, producing a rapid drop in temperatures.

No one is sure that any geoengineering technique would work, and many approaches in the field seem impractical. Schuiling’s approach, for example, would take decades to have even a small impact, and the mining, grinding, and transport of the billions of tons of olivine required would themselves produce enormous carbon emissions.

Jasper Juinen/The New York Times
Kids play on a playground made with olivine, a material said to remove carbon dioxide from the atmosphere, in Arnhem, Netherlands, Oct. 9, 2014.

Many people regard geoengineering as a desperate response to climate change, one that would divert the world’s attention from the goal of eliminating the emissions at the root of the problem.

The climate is a highly complex system, so manipulating temperatures could also have consequences, such as changes in rainfall, that could be catastrophic, or that benefit one region at the expense of another. Critics also point out that geoengineering could be deployed unilaterally by a single country, creating another source of geopolitical tension.

Experts, however, argue that the current situation is becoming calamitous. “We may soon be left with only the choice between geoengineering and suffering,” said Andy Parker of the Institute for Advanced Sustainability Studies in Potsdam, Germany.

In 1991, a volcanic eruption in the Philippines ejected the largest cloud of sulfur dioxide ever recorded into the upper atmosphere. The gas formed droplets of sulfuric acid, which reflected sunlight back into space. For three years, average global temperatures dropped by about 0.5 degrees Celsius. One geoengineering technique would mimic this effect by spraying sulfuric acid droplets into the stratosphere.

David Keith, a researcher at Harvard University, said this geoengineering technique, known as solar radiation management (SRM), should be deployed only slowly and carefully, so that it could be halted if it disrupted weather patterns or caused other problems.

Some critics of geoengineering doubt that any of its impacts could be balanced. People in less developed countries are affected by climate change caused largely by the actions of industrialized nations. Why, then, would they trust that spreading droplets in the sky would help them?

“Nobody likes being the rat in someone else’s laboratory,” said Pablo Suarez of the Red Cross/Red Crescent Climate Centre.

Ideas for removing carbon dioxide from the air cause less alarm. Although they raise thorny issues (olivine, for example, contains small amounts of metals that could contaminate the environment), they would work far more slowly and indirectly, affecting the climate over decades by altering the atmosphere.

Because Dr. Schuiling has been promoting his idea in the Netherlands for years, the country has embraced olivine. Once aware of it, anyone can spot the crushed rock on trails, in gardens, and in play areas.

Eddy Wijnker, a former acoustical engineer, founded the company greenSand in the small town of Maasland. It sells olivine sand for home or commercial use. The company also sells “green sand certificates” that fund the placement of the sand along highways.

Schuiling’s persistence has also spurred research. At the Royal Netherlands Institute for Sea Research in Yerseke, ecologist Francesc Montserrat is studying the possibility of spreading olivine on the seabed. In Belgium, researchers at the University of Antwerp are studying olivine’s effects on crops such as barley and wheat.

Most geoengineering professionals point to the need for more research and to the limits of computer simulations.

Little funding worldwide goes to geoengineering research. Yet even the suggestion of field experiments can cause a public outcry. “People like clearly drawn lines, and one obvious line is that it’s fine to test things on a computer or a lab bench,” said Matthew Watson of the University of Bristol in the United Kingdom. “But they react badly as soon as you start to move into the real world.”

Watson knows those boundaries well. He led a project funded by the British government that included a relatively innocuous test of one technology. In 2011, the researchers planned to launch a balloon to an altitude of about one kilometer and try to pump a small amount of water up to it through a hose. The proposal triggered protests in the United Kingdom, was postponed for half a year, and was finally canceled.

Today there is little prospect of government support for any kind of geoengineering test in the United States, where many politicians deny that climate change is even real.

“The conventional wisdom is that the right doesn’t want to talk about it because it would acknowledge the problem,” said Rafe Pomerance, who worked on environmental issues at the State Department. “And the left is concerned about the impact on emissions.”

So it would be good to discuss the subject openly, Pomerance said. “It will still take some time, but it’s inevitable,” he added.

Worlding Anthropologies of Technosciences?

October 28th, 2014

The past 4S meeting in Buenos Aires made visible the expansion of STS to various regions of the globe. Those of us who happened to be at the 4S meeting at the University of Tokyo four years ago will remember the excitement of having the opportunity to work side-by-side with STS scholars from East and Southeast Asia. The same opportunity for worlding STS opened again this past summer in Buenos Aires.

In order to help increase the diversity of perspectives, Sharon Traweek and I organized a 4S panel on the relationships between STS and anthropology, with a focus on the past, present, and future of the exchange among national traditions. The idea came out of our conversations about the intersections between science studies and US anthropology of the late 1980s, with the work of CASTAC pioneers such as Diana Forsythe, Gary Downey, Joseph Dumit, David Hakken, David Hess, and Sharon Traweek, among several others who helped to establish the technosciences as legitimate domains of anthropological inquiry. It was not an easy battle, as Chris Furlow’s post on the history of CASTAC reminded us, but the results are undeniably all around us today. Panels on the anthropology of science and technology can always be found at professional meetings. Publications on science and technology have space in various journals and the attention of university publishers these days.

For our panel this year we had the opening remarks of Gary Downey who, after reading our proposal aloud, emphasized the importance of advancing a cultural critique of science and technology through a situated, grounded stance. Quoting Marcus and Fischer’s “Anthropology as Cultural Critique” (1986) he emphasized that anthropology of science and technology could not dispense with the reflection upon the place, the situation, and the positioning of the anthropologist. Downey described his own positioning as an anthropologist and critical participant in engineering. Two decades ago Downey challenged the project of “anthropology as cultural critique” to speak widely to audiences outside anthropology and to practice anthropology as cultural critique, as suggested by the title of his early AAA paper, “Outside the Hotel”.

Yet “Anthropology as Cultural Critique” represented, he pointed out, one of the earliest reflexive calls in US anthropology for us to rethink canonical fieldwork orientations and our approach to the craft of ethnography with its representational politics. Downey and many others who invented new spaces to advance critical agendas in the context of science and technology did so by adding to the identity of the anthropologist other identities and responsibilities, such as that of former mechanical engineer, laboratory physicist, theologian, and experimenter of alternative forms of sociality, etc. These overlapping and intersecting identities opened up a whole field of possibilities for renewed modes of inquiry which, after “Anthropology as Cultural Critique”, consisted, as Downey suggested, in the juxtaposition of knowledge, forms of expertise, positionalities, and commitments. This is where we operate as STS scholars: at intersecting research areas, bridging “fault lines” (as Traweek’s felicitous expression puts it), and doing anthropology with and not without anthropologists.

The order of presentations for our panel was defined in a way to elicit contrasts and parallels between different modes of inquiry, grounded in different national anthropological traditions. The first session had Marko Monteiro (UNICAMP), Renzo Taddei (UNIFESP), Luis Felipe R. Murillo (UCLA), and Aalok Khandekar (Maastricht University) as presenters and Michael M. J. Fischer (MIT) as commentator. Marko Monteiro, an anthropologist working for an interdisciplinary program in science and technology policy in Brazil addressed questions of scientific modeling and State policy regarding the issue of deforestation in the Amazon. His paper presented the challenges of conducting multi-sited ethnography alongside multinational science collaborations, and described how scientific modeling for the Amazalert project was designed to accommodate natural and sociocultural differences with the goal of informing public policy. In the context of his ethnographic work, Monteiro soon found himself in a double position as a panelist expert and as an anthropologist interested in how different groups of scientists and policy makers negotiate the incorporation of “social life” through a “politics of associations.”

Similarly to Monteiro’s positioning, Khandekar benefited in his ethnographic work from being an active participant and from serving as the organizer of expert panels involving STS scholars and scientists to design nanotechnology-based development programs in India. Drawing from Fischer’s notion of “third space,” Khandekar addressed how India could productively be framed as such, being fertile ground for conceptual work where cross-disciplinary efforts have articulated the humanities and technosciences under the rubric of innovation. Serving as a knowledge broker for an international collaboration on nanotechnology involving India, Kenya, South Africa, and the Netherlands, Khandekar had first-hand experience in promoting “third spaces” as postcolonial places for cross-disciplinary exchange through storytelling.

Shifting the conversation to the context of computing and political action, Luis Felipe R. Murillo’s paper described a controversy surrounding the proposal of a “feminist programming language” and discussed the ways in which it provides access to the contemporary technopolitical dynamics of computing. The feminist programming language parody served as an entry point to analyze how language ideologies render symbolic boundaries visible, highlighting fundamental aspects of socialization in the context of computing in order to reproduce concepts and notions of the possible, logical, and desirable technical solutions. In respect to socioeconomic and political divisions, he suggested that feminist approaches in their intersectionality became highly controversial for addressing publicly systemic inequalities that are transversal to the context of computing and characterize a South that is imbricated in the North of “big computing” (an apparatus that encompasses computer science, information technology industries, infrastructures, and cultures with their reinvented peripheries within the global North and South).

Renzo Taddei recast the debate about belief in magic, drawing on a long-standing thread of anthropological research on logical reasoning and cultural specificity. Taddei opened his contribution with the assertion that conducting ethnography on witchcraft while assuming that it does not exist is fundamentally ethnocentric. This observation was meant to take us to the core of his concerns regarding the climate sciences vis-à-vis traditional forms of forecasting from the Sertão, a semi-arid and extremely impoverished area of northeastern Brazil. He then proceeded to discuss magical manipulation of the atmosphere from native and Afro-Brazilian perspectives.

For the second day of our panel, we had papers by Kim Fortun (RPI), Mike Fortun (RPI), and Sharon Traweek (UCLA), with commentary by Claudia Fonseca (UFRGS), whose long-term contributions to the study of adoption, popular culture, science, and human rights in Brazil have been highly influential. In her paper, Kim Fortun addressed the double bind of expertise: the in-between of competence and hubris, and the structural risk and unpredictability of the very infrastructures experts are called upon to take responsibility for. Fortun called for a mode of interaction and engagement among science and humanities scholars oriented toward friendship and hospitality, as well as commitment to our technoscientific futures under the aegis of late industrialism. “Ethnographic insight,” according to Fortun, “can loop back into the world” through creative pedagogies attentive to the fact that science practitioners and STS scholars mobilize different analytic lenses while speaking through, and negotiating with, distinct discursive registers in international collaborations. Our assumptions about what is conceptually shared should not anticipate what is to be seen or forged in our international exchanges, since what is foregrounded in discourse always implicates one form of erasure or another. The image Fortun suggested we think with is not that of a network but that of a kaleidoscope, in which the complexity of disasters can be seen across multiple dimensions and scales, in their imbrication, at every turn.

In his presentation, Michael Fortun questioned the so-called “ontological turn” in order to recast the “hauntological” dimensions of our research practices vis-à-vis those of our colleagues in the biosciences, that is, to account for the imponderables of scientific and anthropological languages and practices through the lens of a poststructural understanding of the historical functioning of language. In his study of asthma, Fortun attends to multiple perspectives on and experiences of asthma across national, socioeconomic, scientific, and technical scales. In the context of his project “The Asthma Files,” he suggests, alongside Kim Fortun, hospitality and friendship as frames for engaging, rather than disciplining, the contingency of ethnographic encounters and projects. For future collaborations, two directions were suggested: 1) investigating and experimenting with modes of care, and 2) designing collaborative digital platforms for experimental ethnography. The former relates to scientists’ care for their instruments, methods, theories, intellectual reproduction, infrastructures, and problems in their particular research fields, while the latter poses the question of care among ourselves and of building digital platforms to facilitate and foster collaboration in anthropology.

The panel closed with Sharon Traweek’s paper on the multi-scalar complexity of contemporary scientific collaborations, based on her current research on data practices and gender imbalance in astronomy. Drawing on the concepts of meshwork and excess proposed by researchers with distinct intellectual projects, such as Jennifer McWeeny, Arturo Escobar, Susan Paulson, and Tim Ingold, Traweek discussed billion-dollar science projects that involve multiple research communities clustered around a few recent research devices and facilities, such as the Atacama Large Millimeter/submillimeter Array (ALMA) in Chile and the International Thermonuclear Experimental Reactor (ITER) in France. Amid the ongoing transformation of big science into partially global science, women and ethnic minorities are building meshworks, overlapping networks, in their attempts to build careers in astronomy. Traweek proposed revising the notion of “enrollment” to account for the ways in which scientific mega-projects are sustained through decades of planning, development, construction, and operation at excessive scales, which requires more than support and consensus. Mega-projects in the technosciences are, in Traweek’s terms, “over-determined collages that get built and used” by international teams with “glocal” structures of governance and funding.

In his concluding remarks, Michael M. J. Fischer addressed the relationship between anthropology and STS through three organizing axes: time, topic, and audiences. As a question of time, a quarter century has passed in the shared history of STS and anthropology, during which probing questions have been asked and explored in the technosciences with respect to their apparatuses, codes, languages, life cycles of machines, educational curricula, and personal and technical trajectories, a history well represented in one of the foundational texts of our field, Traweek’s “Beamtimes and Lifetimes” (1988). Traweek helped establish a distinctive anthropological style of “working alongside scientists and engineers through juxtaposition, not against them.” With respect to the relationship between anthropology and STS, Fischer raised the question of pedagogies as, at once, a prominent form of engagement in the technosciences and an anthropological mode of engagement with the technosciences. The common thread connecting the panel contributions was the potential for new pedagogies to emerge from the contribution of world anthropologies of sciences and technologies. That is, in the space of socialization of scientists, engineers, and the public, a space of convention as well as invention and knowledge-making, all the presenters addressed the question of how to advance an anthropology of science and technology through forms of participation that work, as Fischer suggests, as productive critique.

Along similar lines, Claudia Fonseca offered closing remarks about her own trajectory and the persistence of national anthropological traditions informing our cross-dialogues and border crossings. Known in Brazil as an “anthropologist with an accent” (born in the US, trained in France, and based in Brazil for most of her academic life), she emphasized the styles and forms of engagement specific to Brazilian anthropology, which has a tradition of conducting ethnography at home. In sum, the panel allowed the participants to find a common thread connecting a rather disparate set of papers, and to advance a form of dialogue across national traditions and modes of engagement that is attentive to local political histories and (national) anthropological trajectories. As Michael Fortun suggested, we are collectively conjuring, with much more empiria than magic, a new beginning in the experimental tradition for world anthropologies of sciences and technologies.

Latour on digital methods (Installing [social] order)


In a fascinating, apparently non-peer-reviewed non-article available free online here, Tommaso Venturini and Bruno Latour discuss the potential of “digital methods” for the contemporary social sciences.

The paper nicely summarizes the split of sociological methods between statistical aggregates studied with quantitative methods (capturing supposedly macro-phenomena) and irreducibly basic interactions studied with qualitative methods (capturing supposedly micro-phenomena). The problem is that neither aids the sociologist in capturing emergent phenomena, that is, in following controversies and events as they happen, rather than estimating them after they have emerged (quantitative macro-structures) or capturing them divorced from non-local influences (qualitative micro-phenomena).

The solution, they claim, is for the social sciences to adopt digital methods. The paper is not exactly a methodological outline of how to carry out these methods, but it does offer something of a justification for them, which sounds like this:

Thanks to digital traceability, researchers no longer need to choose between precision and scope in their observations: it is now possible to follow a multitude of interactions and, simultaneously, to distinguish the specific contribution that each one makes to the construction of social phenomena. Born in an era of scarcity, the social sciences are entering an age of abundance. In the face of the richness of these new data, nothing justifies keeping old distinctions. Endowed with a quantity of data comparable to the natural sciences, the social sciences can finally correct their lazy eyes and simultaneously maintain the focus and scope of their observations.

Direct brain interface between humans (Science Daily)

Date: November 5, 2014

Source: University of Washington

Summary: Researchers have successfully replicated a direct brain-to-brain connection between pairs of people as part of a scientific study following the team’s initial demonstration a year ago. In the newly published study, which involved six people, researchers were able to transmit the signals from one person’s brain over the Internet and use these signals to control the hand motions of another person within a split second of sending that signal.

In this photo, UW students Darby Losey, left, and Jose Ceballos are positioned in two different buildings on campus as they would be during a brain-to-brain interface demonstration. The sender, left, thinks about firing a cannon at various points throughout a computer game. That signal is sent over the Web directly to the brain of the receiver, right, whose hand hits a touchpad to fire the cannon. Credit: Mary Levin, University of Washington

Sometimes, words just complicate things. What if our brains could communicate directly with each other, bypassing the need for language?

University of Washington researchers have successfully replicated a direct brain-to-brain connection between pairs of people as part of a scientific study following the team’s initial demonstration a year ago. In the newly published study, which involved six people, researchers were able to transmit the signals from one person’s brain over the Internet and use these signals to control the hand motions of another person within a split second of sending that signal.

At the time of the first experiment in August 2013, the UW team was the first to demonstrate two human brains communicating in this way. The researchers then tested their brain-to-brain interface in a more comprehensive study, published Nov. 5 in the journal PLOS ONE.

“The new study brings our brain-to-brain interfacing paradigm from an initial demonstration to something that is closer to a deliverable technology,” said co-author Andrea Stocco, a research assistant professor of psychology and a researcher at UW’s Institute for Learning & Brain Sciences. “Now we have replicated our methods and know that they can work reliably with walk-in participants.”

Collaborator Rajesh Rao, a UW associate professor of computer science and engineering, is the lead author on this work.

The research team combined two kinds of noninvasive instruments and fine-tuned software to connect two human brains in real time. The process is fairly straightforward. One participant is hooked to an electroencephalography machine that reads brain activity and sends electrical pulses via the Web to the second participant, who is wearing a swim cap with a transcranial magnetic stimulation coil placed near the part of the brain that controls hand movements.

Using this setup, one person can send a command to move the hand of the other by simply thinking about that hand movement.
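The sender-to-receiver loop described above (EEG detection of an imagined movement, followed by a command relayed over the network to the receiver's stimulation coil) can be sketched minimally. Everything below is a hypothetical illustration: the function names, the crude band-power proxy, and the threshold are ours, not the UW team's code.

```python
# Hypothetical sender-side sketch of a brain-to-brain link: motor imagery
# is detected by thresholding the power of an EEG window; a detection
# becomes a "FIRE" command that would then be shipped over the network to
# trigger the receiver's transcranial magnetic stimulation coil.

def band_power(samples):
    """Crude stand-in for mu-band (8-12 Hz) power: mean squared amplitude."""
    return sum(s * s for s in samples) / len(samples)

def detect_command(eeg_window, threshold=0.5):
    """Return 'FIRE' when imagined movement drives power past the threshold."""
    return "FIRE" if band_power(eeg_window) > threshold else None

resting = [0.1, -0.1, 0.2, -0.2]    # low-amplitude baseline window
imagining = [1.0, -1.1, 0.9, -1.0]  # stronger activity during motor imagery
print(detect_command(resting))      # None
print(detect_command(imagining))    # FIRE
```

In the real system the detection runs continuously on streamed EEG data, and the "command" is an electrical pulse delivered by the TMS coil rather than a string.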

The UW study involved three pairs of participants. Each pair included a sender and a receiver with different roles and constraints. They sat in separate buildings on campus about a half mile apart and were unable to interact with each other in any way — except for the link between their brains.

Each sender was in front of a computer game in which he or she had to defend a city by firing a cannon and intercepting rockets launched by a pirate ship. But because the senders could not physically interact with the game, the only way they could defend the city was by thinking about moving their hand to fire the cannon.

Across campus, each receiver sat wearing headphones in a dark room — with no ability to see the computer game — with the right hand positioned over the only touchpad that could actually fire the cannon. If the brain-to-brain interface was successful, the receiver’s hand would twitch, pressing the touchpad and firing the cannon that was displayed on the sender’s computer screen across campus.

Researchers found that accuracy varied among the pairs, ranging from 25 to 83 percent. Misses were mostly due to a sender failing to accurately execute the thought to send the “fire” command. The researchers were also able to quantify the exact amount of information transferred between the two brains.
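One standard way to quantify information transferred per trial, offered here purely as an illustration and not necessarily the measure used in the PLOS ONE paper, is to model the link as a binary symmetric channel and compute its capacity from the observed accuracy:

```python
# Bits conveyed per binary decision, modeling the brain-to-brain link as a
# binary symmetric channel whose error rate is (1 - accuracy).
import math

def bits_per_trial(accuracy):
    """Channel capacity in bits: 1 minus the binary entropy of the error rate."""
    p = 1.0 - accuracy
    if p in (0.0, 1.0):
        return 1.0  # a perfectly right (or perfectly wrong) channel carries 1 bit
    h = -p * math.log2(p) - (1 - p) * math.log2(1 - p)
    return 1.0 - h

print(round(bits_per_trial(0.83), 2))  # 0.34 (best pair in the study)
print(round(bits_per_trial(0.50), 2))  # 0.0 (chance level carries no information)
```

Note the asymmetry this exposes: 83 percent accuracy sounds high, but per binary decision it amounts to roughly a third of a bit.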

Another research team from the company Starlab in Barcelona, Spain, recently published results in the same journal showing direct communication between two human brains, but that study only tested one sender brain instead of different pairs of study participants and was conducted offline instead of in real time over the Web.

Now, with a new $1 million grant from the W.M. Keck Foundation, the UW research team is taking the work a step further in an attempt to decode and transmit more complex brain processes.

With the new funding, the research team will expand the types of information that can be transferred from brain to brain, including more complex visual and psychological phenomena such as concepts, thoughts and rules.

They’re also exploring how to influence brain waves that correspond with alertness or sleepiness. Eventually, for example, the brain of a sleepy airplane pilot dozing off at the controls could stimulate the copilot’s brain to become more alert.

The project could also eventually lead to “brain tutoring,” in which knowledge is transferred directly from the brain of a teacher to a student.

“Imagine someone who’s a brilliant scientist but not a brilliant teacher. Complex knowledge is hard to explain — we’re limited by language,” said co-author Chantel Prat, a faculty member at the Institute for Learning & Brain Sciences and a UW assistant professor of psychology.

Other UW co-authors are Joseph Wu of computer science and engineering; Devapratim Sarma and Tiffany Youngquist of bioengineering; and Matthew Bryan, formerly of the UW.

The research published in PLOS ONE was initially funded by the U.S. Army Research Office and the UW, with additional support from the Keck Foundation.

Journal Reference:

  1. Rajesh P. N. Rao, Andrea Stocco, Matthew Bryan, Devapratim Sarma, Tiffany M. Youngquist, Joseph Wu, Chantel S. Prat. A Direct Brain-to-Brain Interface in Humans. PLoS ONE, 2014; 9 (11): e111332 DOI: 10.1371/journal.pone.0111332

Cockroach cyborgs use microphones to detect, trace sounds (Science Daily)

Date: November 6, 2014

Source: North Carolina State University

Summary: Researchers have developed technology that allows cyborg cockroaches, or biobots, to pick up sounds with small microphones and seek out the source of the sound. The technology is designed to help emergency personnel find and rescue survivors in the aftermath of a disaster.

North Carolina State University researchers have developed technology that allows cyborg cockroaches, or biobots, to pick up sounds with small microphones and seek out the source of the sound. The technology is designed to help emergency personnel find and rescue survivors in the aftermath of a disaster. Credit: Eric Whitmire.

North Carolina State University researchers have developed technology that allows cyborg cockroaches, or biobots, to pick up sounds with small microphones and seek out the source of the sound. The technology is designed to help emergency personnel find and rescue survivors in the aftermath of a disaster.

The researchers have also developed technology that can be used as an “invisible fence” to keep the biobots in the disaster area.

“In a collapsed building, sound is the best way to find survivors,” says Dr. Alper Bozkurt, an assistant professor of electrical and computer engineering at NC State and senior author of two papers on the work.

The biobots are equipped with electronic backpacks that control the cockroach’s movements. Bozkurt’s research team has created two types of customized backpacks using microphones. One type of biobot has a single microphone that can capture relatively high-resolution sound from any direction to be wirelessly transmitted to first responders.

The second type of biobot is equipped with an array of three directional microphones to detect the direction of the sound. The research team has also developed algorithms that analyze the sound from the microphone array to localize the source of the sound and steer the biobot in that direction. The system worked well during laboratory testing. Video of a laboratory test of the microphone array system is available at

“The goal is to use the biobots with high-resolution microphones to differentiate between sounds that matter — like people calling for help — from sounds that don’t matter — like a leaking pipe,” Bozkurt says. “Once we’ve identified sounds that matter, we can use the biobots equipped with microphone arrays to zero in on where those sounds are coming from.”
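A common textbook way to localize sound with a small microphone array, shown here as a generic sketch rather than the NC State team's published algorithm, is to estimate the time difference of arrival (TDOA) between two microphones by cross-correlation and steer toward the microphone that heard the sound first:

```python
# Estimate the lag between two microphone signals by brute-force
# cross-correlation. All names and signals are illustrative toy data.

def cross_correlation_lag(a, b):
    """Lag (in samples) at which signal b best aligns with signal a.

    A positive lag means b is a delayed copy of a, i.e. the sound
    reached microphone `a` first.
    """
    n = len(a)
    best_lag, best_score = 0, float("-inf")
    for lag in range(-n + 1, n):
        score = sum(a[i] * b[i + lag]
                    for i in range(max(0, -lag), min(n, n - lag)))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# A short pulse reaches microphone A two samples before microphone B:
mic_a = [0, 0, 1, 2, 1, 0, 0, 0]
mic_b = [0, 0, 0, 0, 1, 2, 1, 0]
print(cross_correlation_lag(mic_a, mic_b))  # 2: steer toward microphone A
```

With three directional microphones, pairwise lags like this one constrain the bearing of the source, which is what lets the backpack steer the biobot toward a survivor's voice.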

A research team led by Dr. Edgar Lobaton has previously shown that biobots can be used to map a disaster area. Funded by the National Science Foundation’s Cyber-Physical Systems Program, the long-term goal is for Bozkurt and Lobaton to merge their research efforts to both map disaster areas and pinpoint survivors. The researchers are already working with collaborator Dr. Mihail Sichitiu to develop the next generation of biobot networking and localization technology.

Bozkurt’s team also recently demonstrated technology that creates an invisible fence for keeping biobots in a defined area. This is significant because it can be used to keep biobots at a disaster site, and to keep the biobots within range of each other so that they can be used as a reliable mobile wireless network. This technology could also be used to steer biobots to light sources, so that the miniaturized solar panels on biobot backpacks can be recharged. Video of the invisible fence technology in practice can be seen at

A paper on the microphone sensor research, “Acoustic Sensors for Biobotic Search and Rescue,” was presented Nov. 5 at the IEEE Sensors 2014 conference in Valencia, Spain. Lead author of the paper is Eric Whitmire, a former undergraduate at NC State. The paper was co-authored by Tahmid Latif, a Ph.D. student at NC State, and Bozkurt.

The paper on the invisible fence for biobots, “Towards Fenceless Boundaries for Solar Powered Insect Biobots,” was presented Aug. 28 at the 36th Annual International IEEE EMBS Conference in Chicago, Illinois. Latif was the lead author. Co-authors include Tristan Novak, a graduate student at NC State, Whitmire and Bozkurt.

The research was supported by the National Science Foundation under grant number 1239243.

Projecting a robot’s intentions: New spin on virtual reality helps engineers read robots’ minds (Science Daily)

Date: October 29, 2014

Source: Massachusetts Institute of Technology

Summary: In a darkened, hangar-like space inside MIT’s Building 41, a small, Roomba-like robot is trying to make up its mind. Standing in its path is an obstacle — a human pedestrian who’s pacing back and forth. To get to the other side of the room, the robot has to first determine where the pedestrian is, then choose the optimal route to avoid a close encounter. As the robot considers its options, its “thoughts” are projected on the ground: A large pink dot appears to follow the pedestrian — a symbol of the robot’s perception of the pedestrian’s position in space.

A new spin on virtual reality helps engineers read robots’ minds. Credit: Video screenshot courtesy of Melanie Gonick/MIT

In a darkened, hangar-like space inside MIT’s Building 41, a small, Roomba-like robot is trying to make up its mind.

Standing in its path is an obstacle — a human pedestrian who’s pacing back and forth. To get to the other side of the room, the robot has to first determine where the pedestrian is, then choose the optimal route to avoid a close encounter.

As the robot considers its options, its “thoughts” are projected on the ground: A large pink dot appears to follow the pedestrian — a symbol of the robot’s perception of the pedestrian’s position in space. Lines, each representing a possible route for the robot to take, radiate across the room in meandering patterns and colors, with a green line signifying the optimal route. The lines and dots shift and adjust as the pedestrian and the robot move.
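The selection of the green "optimal route" among the projected candidates can be illustrated with a toy cost function: path length plus a penalty for waypoints that pass near the pedestrian's estimated position. This is our own minimal sketch under invented assumptions, not the MIT lab's actual planner.

```python
# Score candidate routes by length plus a proximity penalty, then pick
# the cheapest. Routes are lists of (x, y) waypoints; units are arbitrary.
import math

def route_cost(route, pedestrian, penalty_radius=1.0, penalty=10.0):
    """Path length plus a fixed penalty per waypoint near the pedestrian."""
    length = sum(math.dist(a, b) for a, b in zip(route, route[1:]))
    near_misses = sum(1 for p in route
                      if math.dist(p, pedestrian) < penalty_radius)
    return length + penalty * near_misses

def best_route(routes, pedestrian):
    return min(routes, key=lambda r: route_cost(r, pedestrian))

pedestrian = (2.0, 0.0)
direct = [(0, 0), (2, 0), (4, 0)]  # shorter, but passes through the pedestrian
detour = [(0, 0), (2, 2), (4, 0)]  # longer, but keeps clear
print(best_route([direct, detour], pedestrian) == detour)  # True
```

Visualizing exactly these per-route costs on the floor is what lets an engineer see why the robot rejected the shorter path.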

This new visualization system combines ceiling-mounted projectors with motion-capture technology and animation software to project a robot’s intentions in real time. The researchers have dubbed the system “measurable virtual reality” (MVR) — a spin on conventional virtual reality that’s designed to visualize a robot’s “perceptions and understanding of the world,” says Ali-akbar Agha-mohammadi, a postdoc in MIT’s Aerospace Controls Lab.

“Normally, a robot may make some decision, but you can’t quite tell what’s going on in its mind — why it’s choosing a particular path,” Agha-mohammadi says. “But if you can see the robot’s plan projected on the ground, you can connect what it perceives with what it does to make sense of its actions.”

Agha-mohammadi says the system may help speed up the development of self-driving cars, package-delivering drones, and other autonomous, route-planning vehicles.

“As designers, when we can compare the robot’s perceptions with how it acts, we can find bugs in our code much faster,” Agha-mohammadi says. “For example, if we fly a quadrotor, and see something go wrong in its mind, we can terminate the code before it hits the wall, or breaks.”

The system was developed by Shayegan Omidshafiei, a graduate student, and Agha-mohammadi. They and their colleagues, including Jonathan How, a professor of aeronautics and astronautics, will present details of the visualization system at the American Institute of Aeronautics and Astronautics’ SciTech conference in January.

Seeing into the mind of a robot

The researchers initially conceived of the visualization system in response to feedback from visitors to their lab. During demonstrations of robotic missions, it was often difficult for people to understand why robots chose certain actions.

“Some of the decisions almost seemed random,” Omidshafiei recalls.

The team developed the system as a way to visually represent the robots’ decision-making process. The engineers mounted 18 motion-capture cameras on the ceiling to track multiple robotic vehicles simultaneously. They then developed computer software that visually renders “hidden” information, such as a robot’s possible routes, and its perception of an obstacle’s position. They projected this information on the ground in real time, as physical robots operated.

The researchers soon found that by projecting the robots’ intentions, they were able to spot problems in the underlying algorithms, and make improvements much faster than before.

“There are a lot of problems that pop up because of uncertainty in the real world, or hardware issues, and that’s where our system can significantly reduce the amount of effort spent by researchers to pinpoint the causes,” Omidshafiei says. “Traditionally, physical and simulation systems were disjointed. You would have to go to the lowest level of your code, break it down, and try to figure out where the issues were coming from. Now we have the capability to show low-level information in a physical manner, so you don’t have to go deep into your code, or restructure your vision of how your algorithm works. You could see applications where you might cut down a whole month of work into a few days.”

Bringing the outdoors in

The group has explored a few such applications using the visualization system. In one scenario, the team is looking into the role of drones in fighting forest fires. Such drones may one day be used both to survey and to squelch fires — first observing a fire’s effect on various types of vegetation, then identifying and putting out those fires that are most likely to spread.

To make fire-fighting drones a reality, the team is first testing the possibility virtually. In addition to projecting a drone’s intentions, the researchers can also project landscapes to simulate an outdoor environment. In test scenarios, the group has flown physical quadrotors over projections of forests, shown from an aerial perspective to simulate a drone’s view, as if it were flying over treetops. The researchers projected fire on various parts of the landscape, and directed quadrotors to take images of the terrain — images that could eventually be used to “teach” the robots to recognize signs of a particularly dangerous fire.

Going forward, Agha-mohammadi says, the team plans to use the system to test drone performance in package-delivery scenarios. Toward this end, the researchers will simulate urban environments by creating street-view projections of cities, similar to zoomed-in perspectives on Google Maps.

“Imagine we can project a bunch of apartments in Cambridge,” Agha-mohammadi says. “Depending on where the vehicle is, you can look at the environment from different angles, and what it sees will be quite similar to what it would see if it were flying in reality.”

Because the Federal Aviation Administration has placed restrictions on outdoor testing of quadrotors and other autonomous flying vehicles, Omidshafiei points out that testing such robots in a virtual environment may be the next best thing. In fact, the sky’s the limit as far as the types of virtual environments that the new system may project.

“With this system, you can design any environment you want, and can test and prototype your vehicles as if they’re fully outdoors, before you deploy them in the real world,” Omidshafiei says.

This work was supported by Boeing.


Citizen science network produces accurate maps of atmospheric dust (Science Daily)

Date: October 27, 2014

Source: Leiden University

Summary: Measurements by thousands of citizen scientists in the Netherlands using their smartphones and the iSPEX add-on are delivering accurate data on dust particles in the atmosphere that add valuable information to professional measurements. The research team analyzed all measurements from three days in 2013 and combined them into unique maps of dust particles above the Netherlands. The results match and sometimes even exceed those of ground-based measurement networks and satellite instruments.

iSPEX map compiled from all iSPEX measurements performed in the Netherlands on July 8, 2013, between 14:00 and 21:00. Each blue dot represents one of the 6007 measurements that were submitted on that day. At each location on the map, the 50 nearest iSPEX measurements were averaged and converted to Aerosol Optical Thickness, a measure for the total amount of atmospheric particles. This map can be compared to the AOT data from the MODIS Aqua satellite, which flew over the Netherlands at 16:12 local time. The relatively high AOT values were caused by smoke clouds from forest fires in North America, which were blown over the Netherlands at an altitude of 2-4 km. In the course of the day, winds from the North brought clearer air to the northern provinces. Credit: Image courtesy of Universiteit Leiden

Measurements by thousands of citizen scientists in the Netherlands using their smartphones and the iSPEX add-on are delivering accurate data on dust particles in the atmosphere that add valuable information to professional measurements. The iSPEX team, led by Frans Snik of Leiden University, analyzed all measurements from three days in 2013 and combined them into unique maps of dust particles above the Netherlands. The results match and sometimes even exceed those of ground-based measurement networks and satellite instruments.

The iSPEX maps achieve a spatial resolution as small as 2 kilometers, whereas satellite data are much coarser. They also fill in blind spots of established ground-based atmospheric measurement networks. The scientific article presenting these first results of the iSPEX project is being published in Geophysical Research Letters.

The iSPEX team developed a new atmospheric measurement method in the form of a low-cost add-on for smartphone cameras. The iSPEX app instructs participants to scan the blue sky while the phone’s built-in camera takes pictures through the add-on. The photos record both the spectrum and the linear polarization of the sunlight that is scattered by suspended dust particles, and thus contain information about the properties of these particles. While such properties are difficult to measure, much better knowledge on atmospheric particles is needed to understand their effects on health, climate and air traffic.

Thousands of participants performed iSPEX measurements throughout the Netherlands on three cloud-free days in 2013. This large-scale citizen science experiment allowed the iSPEX team to verify the reliability of this new measurement method.

After a rigorous quality assessment of each submitted data point, measurements recorded in specific areas within a limited amount of time are averaged to obtain sufficient accuracy. Subsequently the data are converted to Aerosol Optical Thickness (AOT), which is a standardized quantity related to the total amount of atmospheric particles. The iSPEX AOT data match comparable data from satellites and the AERONET ground station at Cabauw, the Netherlands. In areas with sufficiently high measurement densities, the iSPEX maps can even discern smaller details than satellite data.
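The spatial-averaging step described above (the map caption mentions averaging the 50 nearest measurements at each location before conversion to AOT) can be sketched as a k-nearest-neighbor average. Distances here are planar and all names are invented for illustration; the real pipeline works on geographic coordinates.

```python
# Average the k nearest measurements around a map location.
# measurements: list of ((x, y), value) tuples.
import math

def k_nearest_average(location, measurements, k=50):
    """Mean value of the k measurements closest to `location`."""
    by_distance = sorted(measurements,
                         key=lambda m: math.dist(location, m[0]))
    nearest = by_distance[:k]
    return sum(value for _, value in nearest) / len(nearest)

# Three stations; the two nearest to the origin are averaged:
readings = [((0, 1), 0.2), ((1, 0), 0.4), ((10, 10), 3.0)]
print(k_nearest_average((0, 0), readings, k=2))  # 0.3
```

Averaging over a neighborhood like this trades spatial resolution for accuracy, which is why the maps sharpen in areas with dense participation.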

Team leader Snik: “This proves that our new measurement method works. But the great strength of iSPEX is the measurement philosophy: the establishment of a citizen science network of thousands of enthusiastic volunteers who actively carry out outdoor measurements. In this way, we can collect valuable information about atmospheric particles on locations and/or at times that are not covered by professional equipment. These results are even more accurate than we had hoped, and give rise to further research and development. We are currently investigating to what extent we can extract more information about atmospheric particles from the iSPEX data, like their sizes and compositions. And of course, we want to organize many more measurement days.”

With the help of a grant that supports public activities in Europe during the International Year of Light 2015, the iSPEX team is now preparing for the international expansion of the project. This expansion provides opportunities for national and international parties to join the project. Snik: “Our final goal is to establish a global network of citizen scientists who all contribute measurements to study the sources and societal effects of polluting atmospheric particles.”

Journal Reference:

  1. Frans Snik, Jeroen H. H. Rietjens, Arnoud Apituley, Hester Volten, Bas Mijling, Antonio Di Noia, Stephanie Heikamp, Ritse C. Heinsbroek, Otto P. Hasekamp, J. Martijn Smit, Jan Vonk, Daphne M. Stam, Gerard van Harten, Jozua de Boer, Christoph U. Keller. Mapping atmospheric aerosols with a citizen science network of smartphone spectropolarimeters. Geophysical Research Letters, 2014; DOI: 10.1002/2014GL061462

Scientists find ‘hidden brain signatures’ of consciousness in vegetative state patients (Science Daily)

Date: October 16, 2014

Source: University of Cambridge

Summary: Scientists in Cambridge have found hidden signatures in the brains of people in a vegetative state, which point to networks that could support consciousness even when a patient appears to be unconscious and unresponsive. The study could help doctors identify patients who are aware despite being unable to communicate.

These images show brain networks in two behaviorally similar vegetative patients (left and middle), but one of whom imagined playing tennis (middle panel), alongside a healthy adult (right panel). Credit: Srivas Chennu

Scientists in Cambridge have found hidden signatures in the brains of people in a vegetative state, which point to networks that could support consciousness even when a patient appears to be unconscious and unresponsive. The study could help doctors identify patients who are aware despite being unable to communicate.

There has been a great deal of interest recently in how much patients in a vegetative state following severe brain injury are aware of their surroundings. Although unable to move and respond, some of these patients are able to carry out tasks such as imagining playing a game of tennis. Using a functional magnetic resonance imaging (fMRI) scanner, which measures brain activity, researchers have previously been able to record activity in the pre-motor cortex, the part of the brain which deals with movement, in apparently unconscious patients asked to imagine playing tennis.

Now, a team of researchers led by scientists at the University of Cambridge and the MRC Cognition and Brain Sciences Unit, Cambridge, have used high-density electroencephalographs (EEG) and a branch of mathematics known as ‘graph theory’ to study networks of activity in the brains of 32 patients diagnosed as vegetative and minimally conscious and compare them to healthy adults. The findings of the research are published today in the journal PLOS Computational Biology. The study was funded mainly by the Wellcome Trust, the National Institute of Health Research Cambridge Biomedical Research Centre and the Medical Research Council (MRC).
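The graph-theory approach can be illustrated with a toy example: treat each EEG channel as a node and link channels whose signals correlate strongly, then summarise how richly connected the resulting network is. Everything below (the function name, the correlation threshold, the mean-degree summary) is a simplified stand-in for the spectral connectivity and graph measures used in the actual study.

```python
import numpy as np

def brain_network(signals, threshold=0.5):
    """Build a binary connectivity graph from multichannel EEG.

    `signals` is a channels x samples array. Channels whose pairwise
    correlation exceeds `threshold` are linked; the mean degree gives
    a crude measure of how diversely connected the network is.
    """
    corr = np.corrcoef(signals)            # channels x channels
    adj = (np.abs(corr) > threshold).astype(int)
    np.fill_diagonal(adj, 0)               # ignore self-connections
    degrees = adj.sum(axis=1)
    return adj, degrees.mean()
```

In this caricature, a "well-preserved" network would show many strong links between distant channels, while a severely impaired one would fall apart into isolated nodes.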

The researchers showed that the rich and diversely connected networks that support awareness in the healthy brain are typically — but importantly, not always — impaired in patients in a vegetative state. Some vegetative patients had well-preserved brain networks that look similar to those of healthy adults — these patients were those who had shown signs of hidden awareness by following commands such as imagining playing tennis.

Dr Srivas Chennu from the Department of Clinical Neurosciences at the University of Cambridge says: “Understanding how consciousness arises from the interactions between networks of brain regions is an elusive but fascinating scientific question. But for patients diagnosed as vegetative and minimally conscious, and their families, this is far more than just an academic question — it takes on a very real significance. Our research could improve clinical assessment and help identify patients who might be covertly aware despite being uncommunicative.”

The findings could help researchers develop a relatively simple way of identifying which patients might be aware whilst in a vegetative state. Unlike the ‘tennis test’, which can be a difficult task for patients and requires expensive and often unavailable fMRI scanners, this new technique uses EEG and could therefore be administered at a patient’s bedside. However, the tennis test is stronger evidence that the patient is indeed conscious, to the extent that they can follow commands using their thoughts. The researchers believe that a combination of such tests could help improve accuracy in the prognosis for a patient.

Dr Tristan Bekinschtein from the MRC Cognition and Brain Sciences Unit and the Department of Psychology, University of Cambridge, adds: “Although there are limitations to how predictive our test would be if used in isolation, combined with other tests it could help in the clinical assessment of patients. If a patient’s ‘awareness’ networks are intact, then we know that they are likely to be aware of what is going on around them. But unfortunately, they also suggest that vegetative patients with severely impaired networks at rest are unlikely to show any signs of consciousness.”

Journal Reference:

  1. Chennu S, Finoia P, Kamau E, Allanson J, Williams GB, et al. Spectral Signatures of Reorganised Brain Networks in Disorders of Consciousness. PLOS Computational Biology, 2014; 10 (10): e1003887 DOI: 10.1371/journal.pcbi.1003887

City and rural super-dialects exposed via Twitter (New Scientist)

11 August 2014 by Aviva Rutkin

Magazine issue 2981.

WHAT do two Twitter users who live halfway around the world from each other have in common? They might speak the same “super-dialect”. An analysis of millions of Spanish tweets found two popular speaking styles: one favoured by people living in cities, another by those in small rural towns.

Bruno Gonçalves at Aix-Marseille University in France and David Sánchez at the Institute for Cross-Disciplinary Physics and Complex Systems in Palma, Majorca, Spain, analysed more than 50 million tweets sent over a two-year period. Each tweet was tagged with a GPS marker showing whether the message came from a user somewhere in Spain, Latin America, or Spanish-speaking pockets of Europe and the US.

The team then searched the tweets for variations on common words. Someone tweeting about their socks might use the word calcetas, medias, or soquetes, for example. Another person referring to their car might call it their coche, auto, movi, or one of three other variations with roughly the same meaning. By comparing these word choices to where they came from, the researchers were able to map preferences across continents.
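The core tallying step behind such a map is straightforward to sketch: for each region, count which lexical variant of a concept appears most often. The data layout and names below are illustrative; the study's actual processing of 50 million geotagged tweets was of course far more involved.

```python
from collections import Counter, defaultdict

def variant_preferences(tweets):
    """Find each region's favourite variant of a word.

    `tweets` is a list of (region, word) pairs standing in for
    geotagged messages, e.g. 'calcetas' vs 'medias' vs 'soquetes'.
    Returns the most common variant per region. The data layout is
    illustrative, not the authors' actual format.
    """
    counts = defaultdict(Counter)
    for region, word in tweets:
        counts[region][word] += 1
    # Pick the winning variant in each region
    return {region: c.most_common(1)[0][0] for region, c in counts.items()}
```

Aggregated over millions of tweets, preferences like these are what separate into the urban and rural "super-dialects" the study reports.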

According to their data, Twitter users in major cities thousands of miles apart, like Quito in Ecuador and San Diego in California, tend to have more language in common with each other than with a person tweeting from the nearby countryside, probably due to the influence of mass media.

Studies like these may allow us to dig deeper into how language varies across place, time and culture, says Eric Holt at the University of South Carolina in Columbia.

This article appeared in print under the headline “Super-dialects exposed via millions of tweets”

The rise of data and the death of politics (The Guardian)

Tech pioneers in the US are advocating a new data-based approach to governance – ‘algorithmic regulation’. But if technology provides the answers to society’s problems, what happens to governments?

The Observer, Sunday 20 July 2014

US president Barack Obama with Facebook founder Mark Zuckerberg

Government by social network? US president Barack Obama with Facebook founder Mark Zuckerberg. Photograph: Mandel Ngan/AFP/Getty Images

On 24 August 1965 Gloria Placente, a 34-year-old resident of Queens, New York, was driving to Orchard Beach in the Bronx. Clad in shorts and sunglasses, the housewife was looking forward to quiet time at the beach. But the moment she crossed the Willis Avenue bridge in her Chevrolet Corvair, Placente was surrounded by a dozen patrolmen. There were also 125 reporters, eager to witness the launch of New York police department’s Operation Corral – an acronym for Computer Oriented Retrieval of Auto Larcenists.

Fifteen months earlier, Placente had driven through a red light and neglected to answer the summons, an offence that Corral was going to punish with a heavy dose of techno-Kafkaesque. It worked as follows: a police car stationed at one end of the bridge radioed the licence plates of oncoming cars to a teletypist miles away, who fed them to a Univac 490 computer, an expensive $500,000 toy ($3.5m in today’s dollars) on loan from the Sperry Rand Corporation. The computer checked the numbers against a database of 110,000 cars that were either stolen or belonged to known offenders. In case of a match the teletypist would alert a second patrol car at the bridge’s other exit. It took, on average, just seven seconds.

Compared with the impressive police gear of today – automatic number plate recognition, CCTV cameras, GPS trackers – Operation Corral looks quaint. And the possibilities for control will only expand. European officials have considered requiring all cars entering the European market to feature a built-in mechanism that allows the police to stop vehicles remotely. Speaking earlier this year, Jim Farley, a senior Ford executive, acknowledged that “we know everyone who breaks the law, we know when you’re doing it. We have GPS in your car, so we know what you’re doing. By the way, we don’t supply that data to anyone.” That last bit didn’t sound very reassuring and Farley retracted his remarks.

As both cars and roads get “smart,” they promise nearly perfect, real-time law enforcement. Instead of waiting for drivers to break the law, authorities can simply prevent the crime. Thus, a 50-mile stretch of the A14 between Felixstowe and Rugby is to be equipped with numerous sensors that would monitor traffic by sending signals to and from mobile phones in moving vehicles. The telecoms watchdog Ofcom envisions that such smart roads connected to a centrally controlled traffic system could automatically impose variable speed limits to smooth the flow of traffic but also direct the cars “along diverted routes to avoid the congestion and even [manage] their speed”.

Other gadgets – from smartphones to smart glasses – promise even more security and safety. In April, Apple patented technology that deploys sensors inside the smartphone to analyse if the car is moving and if the person using the phone is driving; if both conditions are met, it simply blocks the phone’s texting feature. Intel and Ford are working on Project Mobil – a face recognition system that, should it fail to recognise the face of the driver, would not only prevent the car being started but also send the picture to the car’s owner (bad news for teenagers).

The car is emblematic of transformations in many other domains, from smart environments for “ambient assisted living” where carpets and walls detect that someone has fallen, to various masterplans for the smart city, where municipal services dispatch resources only to those areas that need them. Thanks to sensors and internet connectivity, the most banal everyday objects have acquired tremendous power to regulate behaviour. Even public toilets are ripe for sensor-based optimisation: the Safeguard Germ Alarm, a smart soap dispenser developed by Procter & Gamble and used in some public WCs in the Philippines, has sensors monitoring the doors of each stall. Once you leave the stall, the alarm starts ringing – and can only be stopped by a push of the soap-dispensing button.

In this context, Google’s latest plan to push its Android operating system on to smart watches, smart cars, smart thermostats and, one suspects, smart everything, looks rather ominous. In the near future, Google will be the middleman standing between you and your fridge, you and your car, you and your rubbish bin, allowing the National Security Agency to satisfy its data addiction in bulk and via a single window.

This “smartification” of everyday life follows a familiar pattern: there’s primary data – a list of what’s in your smart fridge and your bin – and metadata – a log of how often you open either of these things or when they communicate with one another. Both produce interesting insights: cue smart mattresses – one recent model promises to track respiration and heart rates and how much you move during the night – and smart utensils that provide nutritional advice.

In addition to making our lives more efficient, this smart world also presents us with an exciting political choice. If so much of our everyday behaviour is already captured, analysed and nudged, why stick with unempirical approaches to regulation? Why rely on laws when one has sensors and feedback mechanisms? If policy interventions are to be – to use the buzzwords of the day – “evidence-based” and “results-oriented,” technology is here to help.

This new type of governance has a name: algorithmic regulation. In as much as Silicon Valley has a political programme, this is it. Tim O’Reilly, an influential technology publisher, venture capitalist and ideas man (he is to blame for popularising the term “web 2.0”) has been its most enthusiastic promoter. In a recent essay that lays out his reasoning, O’Reilly makes an intriguing case for the virtues of algorithmic regulation – a case that deserves close scrutiny both for what it promises policymakers and the simplistic assumptions it makes about politics, democracy and power.

To see algorithmic regulation at work, look no further than the spam filter in your email. Instead of confining itself to a narrow definition of spam, the email filter has its users teach it. Even Google can’t write rules to cover all the ingenious innovations of professional spammers. What it can do, though, is teach the system what makes a good rule and spot when it’s time to find another rule for finding a good rule – and so on. An algorithm can do this, but it’s the constant real-time feedback from its users that allows the system to counter threats never envisioned by its designers. And it’s not just spam: your bank uses similar methods to spot credit-card fraud.
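The feedback loop described above can be caricatured in a few lines of Python: a filter whose "rules" are just word counts that users' spam and ham judgments continually rewrite. Real filters are Bayesian and far more sophisticated; every name and the bare count-comparison scoring here are invented for illustration.

```python
from collections import Counter

class FeedbackFilter:
    """A miniature spam filter that learns only from user feedback.

    Each time a user marks a message as spam or ham, per-word counts
    are updated, so the rule set effectively rewrites itself. This is
    the real-time feedback loop the essay describes, drastically
    simplified.
    """
    def __init__(self):
        self.spam = Counter()
        self.ham = Counter()

    def mark(self, text, is_spam):
        """Record a user's verdict on a message."""
        target = self.spam if is_spam else self.ham
        target.update(text.lower().split())

    def is_spam(self, text):
        """Classify by comparing accumulated spam vs ham word counts."""
        words = text.lower().split()
        spam_score = sum(self.spam[w] for w in words)
        ham_score = sum(self.ham[w] for w in words)
        return spam_score > ham_score
```

The point of the caricature is that no rule for "spam" is ever written down; the system's behaviour is entirely a residue of its users' feedback, which is exactly what makes it adaptable to threats its designers never envisioned.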

In his essay, O’Reilly draws broader philosophical lessons from such technologies, arguing that they work because they rely on “a deep understanding of the desired outcome” (spam is bad!) and periodically check if the algorithms are actually working as expected (are too many legitimate emails ending up marked as spam?).

O’Reilly presents such technologies as novel and unique – we are living through a digital revolution after all – but the principle behind “algorithmic regulation” would be familiar to the founders of cybernetics – a discipline that, even in its name (it means “the science of governance”) hints at its great regulatory ambitions. This principle, which allows the system to maintain its stability by constantly learning and adapting itself to the changing circumstances, is what the British psychiatrist Ross Ashby, one of the founding fathers of cybernetics, called “ultrastability”.

To illustrate it, Ashby designed the homeostat. This clever device consisted of four interconnected RAF bomb control units – mysterious looking black boxes with lots of knobs and switches – that were sensitive to voltage fluctuations. If one unit stopped working properly – say, because of an unexpected external disturbance – the other three would rewire and regroup themselves, compensating for its malfunction and keeping the system’s overall output stable.

Ashby’s homeostat achieved “ultrastability” by always monitoring its internal state and cleverly redeploying its spare resources.

Like the spam filter, it didn’t have to specify all the possible disturbances – only the conditions for how and when it must be updated and redesigned. This is no trivial departure from how the usual technical systems, with their rigid, if-then rules, operate: suddenly, there’s no need to develop procedures for governing every contingency, for – or so one hopes – algorithms and real-time, immediate feedback can do a better job than inflexible rules out of touch with reality.
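Ashby's principle is easy to simulate: keep a unit's output inside an "essential" range, and randomly rewire its weights whenever it strays. The toy sketch below (all names and numbers are invented) drastically simplifies the four-unit electromechanical original, but it captures the idea that the system specifies only the conditions for rewiring, not the disturbances themselves.

```python
import random

def homeostat_step(state, weights, limit=1.0, rng=random.Random(42)):
    """One step of an Ashby-style ultrastable unit.

    The unit's output is a weighted sum of the states of the other
    units; whenever it leaves the essential range [-limit, limit],
    the unit randomly rewires its own weights until the output is
    back in range, compensating for whatever disturbance occurred.
    """
    def output():
        return sum(w * s for w, s in zip(weights, state))

    while abs(output()) > limit:
        # Random rewiring: try a new weight configuration
        weights[:] = [rng.uniform(-1, 1) for _ in weights]
    return output()
```

No weight configuration is privileged in advance; stability is whatever configuration the random search happens to land on, which is why the homeostat never needed a catalogue of possible disturbances.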

Algorithmic regulation could certainly make the administration of existing laws more efficient. If it can fight credit-card fraud, why not tax fraud? Italian bureaucrats have experimented with the redditometro, or income meter, a tool for comparing people’s spending patterns – recorded thanks to an arcane Italian law – with their declared income, so that authorities know when you spend more than you earn. Spain has expressed interest in a similar tool.

Such systems, however, are toothless against the real culprits of tax evasion – the super-rich families who profit from various offshoring schemes or simply write outrageous tax exemptions into the law. Algorithmic regulation is perfect for enforcing the austerity agenda while leaving those responsible for the fiscal crisis off the hook. To understand whether such systems are working as expected, we need to modify O’Reilly’s question: for whom are they working? If it’s just the tax-evading plutocrats, the global financial institutions interested in balanced national budgets and the companies developing income-tracking software, then it’s hardly a democratic success.

With his belief that algorithmic regulation is based on “a deep understanding of the desired outcome”, O’Reilly cunningly disconnects the means of doing politics from its ends. But the how of politics is as important as the what of politics – in fact, the former often shapes the latter. Everybody agrees that education, health, and security are all “desired outcomes”, but how do we achieve them? In the past, when we faced the stark political choice of delivering them through the market or the state, the lines of the ideological debate were clear. Today, when the presumed choice is between the digital and the analog or between the dynamic feedback and the static law, that ideological clarity is gone – as if the very choice of how to achieve those “desired outcomes” was apolitical and didn’t force us to choose between different and often incompatible visions of communal living.

By assuming that the utopian world of infinite feedback loops is so efficient that it transcends politics, the proponents of algorithmic regulation fall into the same trap as the technocrats of the past. Yes, these systems are terrifyingly efficient – in the same way that Singapore is terrifyingly efficient (O’Reilly, unsurprisingly, praises Singapore for its embrace of algorithmic regulation). And while Singapore’s leaders might believe that they, too, have transcended politics, it doesn’t mean that their regime cannot be assessed outside the linguistic swamp of efficiency and innovation – by using political, not economic benchmarks.

As Silicon Valley keeps corrupting our language with its endless glorification of disruption and efficiency – concepts at odds with the vocabulary of democracy – our ability to question the “how” of politics is weakened. Silicon Valley’s default answer to the how of politics is what I call solutionism: problems are to be dealt with via apps, sensors, and feedback loops – all provided by startups. Earlier this year Google’s Eric Schmidt even promised that startups would provide the solution to the problem of economic inequality: the latter, it seems, can also be “disrupted”. And where the innovators and the disruptors lead, the bureaucrats follow.

The intelligence services embraced solutionism before other government agencies. Thus, they reduced the topic of terrorism from a subject that had some connection to history and foreign policy to an informational problem of identifying emerging terrorist threats via constant surveillance. They urged citizens to accept that instability is part of the game, that its root causes are neither traceable nor reparable, that the threat can only be pre-empted by out-innovating and out-surveilling the enemy with better communications.

Speaking in Athens last November, the Italian philosopher Giorgio Agamben discussed an epochal transformation in the idea of government, “whereby the traditional hierarchical relation between causes and effects is inverted, so that, instead of governing the causes – a difficult and expensive undertaking – governments simply try to govern the effects”.

Nobel laureate Daniel Kahneman

Governments’ current favourite psychologist, Daniel Kahneman. Photograph: Richard Saker for the Observer

For Agamben, this shift is emblematic of modernity. It also explains why the liberalisation of the economy can co-exist with the growing proliferation of control – by means of soap dispensers and remotely managed cars – into everyday life. “If government aims for the effects and not the causes, it will be obliged to extend and multiply control. Causes demand to be known, while effects can only be checked and controlled.” Algorithmic regulation is an enactment of this political programme in technological form.

The true politics of algorithmic regulation become visible once its logic is applied to the social nets of the welfare state. There are no calls to dismantle them, but citizens are nonetheless encouraged to take responsibility for their own health. Consider how Fred Wilson, an influential US venture capitalist, frames the subject. “Health… is the opposite side of healthcare,” he said at a conference in Paris last December. “It’s what keeps you out of the healthcare system in the first place.” Thus, we are invited to start using self-tracking apps and data-sharing platforms and monitor our vital indicators, symptoms and discrepancies on our own.

This goes nicely with recent policy proposals to save troubled public services by encouraging healthier lifestyles. Consider a 2013 report by Westminster council and the Local Government Information Unit, a thinktank, calling for the linking of housing and council benefits to claimants’ visits to the gym – with the help of smartcards. They might not be needed: many smartphones are already tracking how many steps we take every day (Google Now, the company’s virtual assistant, keeps score of such data automatically and periodically presents it to users, nudging them to walk more).

The numerous possibilities that tracking devices offer to health and insurance industries are not lost on O’Reilly. “You know the way that advertising turned out to be the native business model for the internet?” he wondered at a recent conference. “I think that insurance is going to be the native business model for the internet of things.” Things do seem to be heading that way: in June, Microsoft struck a deal with American Family Insurance, the eighth-largest home insurer in the US, in which both companies will fund startups that want to put sensors into smart homes and smart cars for the purposes of “proactive protection”.

An insurance company would gladly subsidise the costs of installing yet another sensor in your house – as long as it can automatically alert the fire department or make front porch lights flash in case your smoke detector goes off. For now, accepting such tracking systems is framed as an extra benefit that can save us some money. But when do we reach a point where not using them is seen as a deviation – or, worse, an act of concealment – that ought to be punished with higher premiums?

Or consider a May 2014 report from 2020health, another thinktank, proposing to extend tax rebates to Britons who give up smoking, stay slim or drink less. “We propose ‘payment by results’, a financial reward for people who become active partners in their health, whereby if you, for example, keep your blood sugar levels down, quit smoking, keep weight off, [or] take on more self-care, there will be a tax rebate or an end-of-year bonus,” they state. Smart gadgets are the natural allies of such schemes: they document the results and can even help achieve them – by constantly nagging us to do what’s expected.

The unstated assumption of most such reports is that the unhealthy are not only a burden to society but that they deserve to be punished (fiscally for now) for failing to be responsible. For what else could possibly explain their health problems but their personal failings? It’s certainly not the power of food companies or class-based differences or various political and economic injustices. One can wear a dozen powerful sensors, own a smart mattress and even do a close daily reading of one’s poop – as some self-tracking aficionados are wont to do – but those injustices would still be nowhere to be seen, for they are not the kind of stuff that can be measured with a sensor. The devil doesn’t wear data. Social injustices are much harder to track than the everyday lives of the individuals whose lives they affect.

In shifting the focus of regulation from reining in institutional and corporate malfeasance to perpetual electronic guidance of individuals, algorithmic regulation offers us a good-old technocratic utopia of politics without politics. Disagreement and conflict, under this model, are seen as unfortunate byproducts of the analog era – to be solved through data collection – and not as inevitable results of economic or ideological conflicts.

However, a politics without politics does not mean a politics without control or administration. As O’Reilly writes in his essay: “New technologies make it possible to reduce the amount of regulation while actually increasing the amount of oversight and production of desirable outcomes.” Thus, it’s a mistake to think that Silicon Valley wants to rid us of government institutions. Its dream state is not the small government of libertarians – a small state, after all, needs neither fancy gadgets nor massive servers to process the data – but the data-obsessed and data-obese state of behavioural economists.

The nudging state is enamoured of feedback technology, for its key founding principle is that while we behave irrationally, our irrationality can be corrected – if only the environment acts upon us, nudging us towards the right option. Unsurprisingly, one of the three lonely references at the end of O’Reilly’s essay is to a 2012 speech entitled “Regulation: Looking Backward, Looking Forward” by Cass Sunstein, the prominent American legal scholar who is the chief theorist of the nudging state.

And while the nudgers have already captured the state by making behavioural psychology the favourite idiom of government bureaucracy – Daniel Kahneman is in, Machiavelli is out – the algorithmic regulation lobby advances in more clandestine ways. They create innocuous non-profit organisations like Code for America which then co-opt the state – under the guise of encouraging talented hackers to tackle civic problems.

Airbnb's homepage.

Airbnb: part of the reputation-driven economy.

Such initiatives aim to reprogramme the state and make it feedback-friendly, crowding out other means of doing politics. For all those tracking apps, algorithms and sensors to work, databases need interoperability – which is what such pseudo-humanitarian organisations, with their ardent belief in open data, demand. And when the government is too slow to move at Silicon Valley’s speed, they simply move inside the government. Thus, Jennifer Pahlka, the founder of Code for America and a protege of O’Reilly, became the deputy chief technology officer of the US government – while pursuing a one-year “innovation fellowship” from the White House.

Cash-strapped governments welcome such colonisation by technologists – especially if it helps to identify and clean up datasets that can be profitably sold to companies who need such data for advertising purposes. Recent clashes over the sale of student and health data in the UK are just a precursor of battles to come: after all state assets have been privatised, data is the next target. For O’Reilly, open data is “a key enabler of the measurement revolution”.

This “measurement revolution” seeks to quantify the efficiency of various social programmes, as if the rationale behind the social nets that some of them provide was to achieve perfection of delivery. The actual rationale, of course, was to enable a fulfilling life by suppressing certain anxieties, so that citizens can pursue their life projects relatively undisturbed. This vision did spawn a vast bureaucratic apparatus and the critics of the welfare state from the left – most prominently Michel Foucault – were right to question its disciplining inclinations. Nonetheless, neither perfection nor efficiency were the “desired outcome” of this system. Thus, to compare the welfare state with the algorithmic state on those grounds is misleading.

But we can compare their respective visions for human fulfilment – and the role they assign to markets and the state. Silicon Valley’s offer is clear: thanks to ubiquitous feedback loops, we can all become entrepreneurs and take care of our own affairs! As Brian Chesky, the chief executive of Airbnb, told the Atlantic last year, “What happens when everybody is a brand? When everybody has a reputation? Every person can become an entrepreneur.”

Under this vision, we will all code (for America!) in the morning, drive Uber cars in the afternoon, and rent out our kitchens as restaurants – courtesy of Airbnb – in the evening. As O’Reilly writes of Uber and similar companies, “these services ask every passenger to rate their driver (and drivers to rate their passenger). Drivers who provide poor service are eliminated. Reputation does a better job of ensuring a superb customer experience than any amount of government regulation.”

The state behind the “sharing economy” does not wither away; it might be needed to ensure that the reputation accumulated on Uber, Airbnb and other platforms of the “sharing economy” is fully liquid and transferable, creating a world where our every social interaction is recorded and assessed, erasing whatever differences exist between social domains. Someone, somewhere will eventually rate you as a passenger, a house guest, a student, a patient, a customer. Whether this ranking infrastructure will be decentralised, provided by a giant like Google or rest with the state is not yet clear but the overarching objective is: to make reputation into a feedback-friendly social net that could protect the truly responsible citizens from the vicissitudes of deregulation.

Admiring the reputation models of Uber and Airbnb, O’Reilly wants governments to be “adopting them where there are no demonstrable ill effects”. But what counts as an “ill effect” and how to demonstrate it is a key question that belongs to the how of politics that algorithmic regulation wants to suppress. It’s easy to demonstrate “ill effects” if the goal of regulation is efficiency but what if it is something else? Surely, there are some benefits – fewer visits to the psychoanalyst, perhaps – in not having your every social interaction ranked?

The imperative to evaluate and demonstrate “results” and “effects” already presupposes that the goal of policy is the optimisation of efficiency. However, as long as democracy is irreducible to a formula, its composite values will always lose this battle: they are much harder to quantify.

For Silicon Valley, though, the reputation-obsessed algorithmic state of the sharing economy is the new welfare state. If you are honest and hardworking, your online reputation would reflect this, producing a highly personalised social net. It is “ultrastable” in Ashby’s sense: while the welfare state assumes the existence of specific social evils it tries to fight, the algorithmic state makes no such assumptions. The future threats can remain fully unknowable and fully addressable – on the individual level.

Silicon Valley, of course, is not alone in touting such ultrastable individual solutions. Nassim Taleb, in his best-selling 2012 book Antifragile, makes a similar, if more philosophical, plea for maximising our individual resourcefulness and resilience: don’t get one job but many, don’t take on debt, count on your own expertise. It’s all about resilience, risk-taking and, as Taleb puts it, “having skin in the game”. As Julian Reid and Brad Evans write in their new book, Resilient Life: The Art of Living Dangerously, this growing cult of resilience masks a tacit acknowledgement that no collective project could even aspire to tame the proliferating threats to human existence – we can only hope to equip ourselves to tackle them individually. “When policy-makers engage in the discourse of resilience,” write Reid and Evans, “they do so in terms which aim explicitly at preventing humans from conceiving of danger as a phenomenon from which they might seek freedom and even, in contrast, as that to which they must now expose themselves.”

What, then, is the progressive alternative? “The enemy of my enemy is my friend” doesn’t work here: just because Silicon Valley is attacking the welfare state doesn’t mean that progressives should defend it to the very last bullet (or tweet). First, even leftist governments have limited space for fiscal manoeuvres, as the kind of discretionary spending required to modernise the welfare state would never be approved by the global financial markets. And it’s the ratings agencies and bond markets – not the voters – who are in charge today.

Second, the leftist critique of the welfare state has become only more relevant today when the exact borderlines between welfare and security are so blurry. When Google’s Android powers so much of our everyday life, the government’s temptation to govern us through remotely controlled cars and alarm-operated soap dispensers will be all too great. This will expand government’s hold over areas of life previously free from regulation.

With so much data, the government’s favourite argument in fighting terror – if only the citizens knew as much as we do, they too would impose all these legal exceptions – easily extends to other domains, from health to climate change. Consider a recent academic paper that used Google search data to study obesity patterns in the US, finding significant correlation between search keywords and body mass index levels. “Results suggest great promise of the idea of obesity monitoring through real-time Google Trends data”, note the authors, which would be “particularly attractive for government health institutions and private businesses such as insurance companies.”

If Google senses a flu epidemic somewhere, it’s hard to challenge its hunch – we simply lack the infrastructure to process so much data at this scale. Google can be proven wrong after the fact – as has recently been the case with its flu trends data, which was shown to overestimate the number of infections, possibly because of its failure to account for the intense media coverage of flu – but so is the case with most terrorist alerts. It’s the immediate, real-time nature of computer systems that makes them perfect allies of an infinitely expanding and pre-emption-obsessed state.

Perhaps, the case of Gloria Placente and her failed trip to the beach was not just a historical oddity but an early omen of how real-time computing, combined with ubiquitous communication technologies, would transform the state. One of the few people to have heeded that omen was a little-known American advertising executive called Robert MacBride, who pushed the logic behind Operation Corral to its ultimate conclusions in his unjustly neglected 1967 book, The Automated State.

At the time, America was debating the merits of establishing a national data centre to aggregate various national statistics and make them available to government agencies. MacBride attacked his contemporaries’ inability to see how the state would exploit the metadata accrued as everything was being computerised. Instead of “a large scale, up-to-date Austro-Hungarian empire”, modern computer systems would produce “a bureaucracy of almost celestial capacity” that can “discern and define relationships in a manner which no human bureaucracy could ever hope to do”.

“Whether one bowls on a Sunday or visits a library instead is [of] no consequence since no one checks those things,” he wrote. Not so when computer systems can aggregate data from different domains and spot correlations. “Our individual behaviour in buying and selling an automobile, a house, or a security, in paying our debts and acquiring new ones, and in earning money and being paid, will be noted meticulously and studied exhaustively,” warned MacBride. Thus, a citizen will soon discover that “his choice of magazine subscriptions… can be found to indicate accurately the probability of his maintaining his property or his interest in the education of his children.” This sounds eerily similar to the recent case of a hapless father who found that his daughter was pregnant from a coupon that Target, a retailer, sent to their house. Target’s hunch was based on its analysis of products – for example, unscented lotion – usually bought by other pregnant women.
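The pattern MacBride anticipated, joining records from unrelated domains and scoring the correlations, is now textbook data mining. A crude, hypothetical sketch of the purchase-based scoring the Target anecdote describes (the product names and weights are invented; real systems fit such weights with regression on historical data):

```python
# Hypothetical weights: how strongly each purchase correlated with the
# predicted condition in some (invented) training data.
SIGNAL_WEIGHTS = {
    "unscented lotion": 0.30,
    "large handbag": 0.10,
    "calcium supplement": 0.25,
    "cotton balls": 0.15,
}

def pregnancy_score(basket):
    """Naive additive score: sum the weights of signal products bought."""
    return sum(SIGNAL_WEIGHTS.get(item, 0.0) for item in basket)

basket = ["unscented lotion", "calcium supplement", "bread"]
score = pregnancy_score(basket)
print(f"score = {score:.2f}")  # 0.55 for this basket
# A retailer would mail the coupon once the score crossed some threshold.
```

Nothing in the basket is sensitive on its own; the inference emerges only from the aggregation, which is precisely MacBride's warning about magazine subscriptions.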

For MacBride the conclusion was obvious. “Political rights won’t be violated but will resemble those of a small stockholder in a giant enterprise,” he wrote. “The mark of sophistication and savoir-faire in this future will be the grace and flexibility with which one accepts one’s role and makes the most of what it offers.” In other words, since we are all entrepreneurs first and citizens second, we might as well make the most of it.

What, then, is to be done? Technophobia is no solution. Progressives need technologies that stick with the spirit, if not the institutional form, of the welfare state, preserving its commitment to creating ideal conditions for human flourishing. Even some ultrastability is welcome. Stability was a laudable goal of the welfare state until it encountered a trap: in specifying the exact protections that the state was to offer against the excesses of capitalism, it could not easily deflect new, previously unspecified forms of exploitation.

How do we build welfarism that is both decentralised and ultrastable? A form of guaranteed basic income – whereby some welfare services are replaced by direct cash transfers to citizens – fits the two criteria.

Creating the right conditions for the emergence of political communities around causes and issues they deem relevant would be another good step. Full compliance with the principle of ultrastability dictates that such issues cannot be anticipated or dictated from above – by political parties or trade unions – and must be left unspecified.

What can be specified is the kind of communications infrastructure needed to abet this cause: it should be free to use, hard to track, and open to new, subversive uses. Silicon Valley’s existing infrastructure is great for fulfilling the needs of the state, not of self-organising citizens. It can, of course, be redeployed for activist causes – and it often is – but there’s no reason to accept the status quo as either ideal or inevitable.

Why, after all, appropriate what should belong to the people in the first place? While many of the creators of the internet bemoan how low their creature has fallen, their anger is misdirected. The fault is not with that amorphous entity but, first of all, with the absence of robust technology policy on the left – a policy that can counter the pro-innovation, pro-disruption, pro-privatisation agenda of Silicon Valley. In its absence, all these emerging political communities will operate with their wings clipped. Whether the next Occupy Wall Street would be able to occupy anything in a truly smart city remains to be seen: most likely, they would be out-censored and out-droned.

To his credit, MacBride understood all of this in 1967. “Given the resources of modern technology and planning techniques,” he warned, “it is really no great trick to transform even a country like ours into a smoothly running corporation where every detail of life is a mechanical function to be taken care of.” MacBride’s fear is O’Reilly’s master plan: the government, he writes, ought to be modelled on the “lean startup” approach of Silicon Valley, which is “using data to constantly revise and tune its approach to the market”. It’s this very approach that Facebook has recently deployed to maximise user engagement on the site: if showing users more happy stories does the trick, so be it.

Algorithmic regulation, whatever its immediate benefits, will give us a political regime where technology corporations and government bureaucrats call all the shots. The Polish science fiction writer Stanislaw Lem, in a pointed critique of cybernetics published, as it happens, roughly at the same time as The Automated State, put it best: “Society cannot give up the burden of having to decide about its own fate by sacrificing this freedom for the sake of the cybernetic regulator.”

The New Abolitionism (The Nation)

Lectures Aren’t Just Boring, They’re Ineffective, Too, Study Finds (Science)

12 May 2014 3:00 pm

Blah? Traditional lecture classes have higher undergraduate failure rates than those using active learning techniques, new research finds. Credit: Wikimedia

Are your lectures droning on? Change it up every 10 minutes with more active teaching techniques and more students will succeed, researchers say. A new study finds that undergraduate students in classes with traditional stand-and-deliver lectures are 1.5 times more likely to fail than students in classes that use more stimulating, so-called active learning methods.

“Universities were founded in Western Europe in 1050 and lecturing has been the predominant form of teaching ever since,” says biologist Scott Freeman of the University of Washington, Seattle. But many scholars have challenged the “sage on a stage” approach to teaching science, technology, engineering, and math (STEM) courses, arguing that engaging students with questions or group activities is more effective.

To weigh the evidence, Freeman and a group of colleagues analyzed 225 studies of undergraduate STEM teaching methods. The meta-analysis, published online today in the Proceedings of the National Academy of Sciences, concluded that teaching approaches that turned students into active participants rather than passive listeners reduced failure rates and boosted exam scores by almost half a standard deviation. “The change in the failure rates is whopping,” Freeman says. And the exam improvement—about 6%—could, for example, “bump [a student’s] grades from a B– to a B.”
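The headline numbers can be checked with back-of-the-envelope arithmetic, using the approximate failure rates reported for the meta-analysis (about 34% under lecturing versus about 22% with active learning) and its reported effect size of 0.47 standard deviations; the exam-scale SD below is an assumed figure for illustration:

```python
# Approximate failure rates from the Freeman et al. meta-analysis,
# as widely reported.
fail_lecture = 0.338
fail_active = 0.218

risk_ratio = fail_lecture / fail_active
print(f"risk ratio = {risk_ratio:.2f}")  # ~1.55, i.e. "1.5 times more likely to fail"

# The ~6% exam gain is consistent with roughly half a standard deviation
# if one SD on the exam scale is around 13 percentage points (assumed).
sd_points = 13.0
gain_points = 0.47 * sd_points
print(f"exam gain = {gain_points:.1f} points")
```

The two headline claims (the 1.5x failure risk and the half-SD exam gain) are thus internally consistent under a plausible exam-scale assumption.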

“This is a really important article—the impression I get is that it’s almost unethical to be lecturing if you have this data,” says Eric Mazur, a physicist at Harvard University who has campaigned against stale lecturing techniques for 27 years and was not involved in the work. “It’s good to see such a cohesive picture emerge from their meta-analysis—an abundance of proof that lecturing is outmoded, outdated, and inefficient.”

Although there is no single definition of active learning approaches, they include asking students to answer questions by using handheld clickers, calling on individuals or groups randomly, or having students clarify concepts to each other and reach a consensus on an issue.

Freeman says he’s started using such techniques even in large classes. “My introductory biology course has gotten up to 700 students,” he says. “For the ultimate class session—I don’t say lecture—I’m showing PowerPoint slides, but everything is a question and I use clickers and random calling. Somebody droning on for 15 minutes at a time and then doing cookbook labs isn’t interesting.” Freeman estimates that scaling up such active learning approaches could enable success for tens of thousands of students who might otherwise drop or fail STEM courses.

Despite its advantages, active learning isn’t likely to completely kill the lecture, says Noah Finkelstein, a physics professor who directs the Center for STEM Learning at the University of Colorado, Boulder, and was not involved in the study. The new study “is consistent with what the benefits of active learning are showing us,” he says. “But I don’t think there should be a monolithic stance about lecture or no lecture. There are still times when lectures will be needed, but the traditional mode of stand-and-deliver is being demonstrated as less effective at promoting student learning and preparing future teachers.”

The current study didn’t directly address the effectiveness of one new twist in the traditional lecturing format: massive open online courses that can beam talks to thousands or even millions of students. But Freeman says the U.S. Department of Education has conducted its own meta-analysis of distance learning, and it found there was no difference in being lectured at in a classroom versus through a computer screen at home. So, Freeman says: “If you’re going to get lectured at, you might as well be at home in bunny slippers.”

The Change Within: The Obstacles We Face Are Not Just External (The Nation)

The climate crisis has such bad timing that confronting it requires not only a new economy but a new way of thinking.

Naomi Klein

April 21, 2014

(Reuters/China Daily)

This is a story about bad timing.

One of the most disturbing ways that climate change is already playing out is through what ecologists call “mismatch” or “mistiming.” This is the process whereby warming causes animals to fall out of step with a critical food source, particularly at breeding times, when a failure to find enough food can lead to rapid population losses.

The migration patterns of many songbird species, for instance, have evolved over millennia so that eggs hatch precisely when food sources such as caterpillars are at their most abundant, providing parents with ample nourishment for their hungry young. But because spring now often arrives early, the caterpillars are hatching earlier too, which means that in some areas they are less plentiful when the chicks hatch, threatening a number of health and fertility impacts. Similarly, in West Greenland, caribou are arriving at their calving grounds only to find themselves out of sync with the forage plants they have relied on for thousands of years, now growing earlier thanks to rising temperatures. That is leaving female caribou with less energy for lactation, reproduction and feeding their young, a mismatch that has been linked to sharp decreases in calf births and survival rates.

Scientists are studying cases of climate-related mistiming among dozens of species, from Arctic terns to pied flycatchers. But there is one important species they are missing—us. Homo sapiens. We too are suffering from a terrible case of climate-related mistiming, albeit in a cultural-historical, rather than a biological, sense. Our problem is that the climate crisis hatched in our laps at a moment in history when political and social conditions were uniquely hostile to a problem of this nature and magnitude—that moment being the tail end of the go-go ’80s, the blastoff point for the crusade to spread deregulated capitalism around the world. Climate change is a collective problem demanding collective action the likes of which humanity has never actually accomplished. Yet it entered mainstream consciousness in the midst of an ideological war being waged on the very idea of the collective sphere.

This deeply unfortunate mistiming has created all sorts of barriers to our ability to respond effectively to this crisis. It has meant that corporate power was ascendant at the very moment when we needed to exert unprecedented controls over corporate behavior in order to protect life on earth. It has meant that regulation was a dirty word just when we needed those powers most. It has meant that we are ruled by a class of politicians who know only how to dismantle and starve public institutions, just when they most need to be fortified and reimagined. And it has meant that we are saddled with an apparatus of “free trade” deals that tie the hands of policy-makers just when they need maximum flexibility to achieve a massive energy transition.

Confronting these various structural barriers to the next economy is the critical work of any serious climate movement. But it’s not the only task at hand. We also have to confront how the mismatch between climate change and market domination has created barriers within our very selves, making it harder to look at this most pressing of humanitarian crises with anything more than furtive, terrified glances. Because of the way our daily lives have been altered by both market and technological triumphalism, we lack many of the observational tools necessary to convince ourselves that climate change is real—let alone the confidence to believe that a different way of living is possible.

And little wonder: just when we needed to gather, our public sphere was disintegrating; just when we needed to consume less, consumerism took over virtually every aspect of our lives; just when we needed to slow down and notice, we sped up; and just when we needed longer time horizons, we were able to see only the immediate present.

This is our climate change mismatch, and it affects not just our species, but potentially every other species on the planet as well.

The good news is that, unlike reindeer and songbirds, we humans are blessed with the capacity for advanced reasoning and therefore the ability to adapt more deliberately—to change old patterns of behavior with remarkable speed. If the ideas that rule our culture are stopping us from saving ourselves, then it is within our power to change those ideas. But before that can happen, we first need to understand the nature of our personal climate mismatch.

› Climate change demands that we consume less, but being consumers is all we know. Climate change is not a problem that can be solved simply by changing what we buy—a hybrid instead of an SUV, some carbon offsets when we get on a plane. At its core, it is a crisis born of overconsumption by the comparatively wealthy, which means the world’s most manic consumers are going to have to consume less.

The problem is not “human nature,” as we are so often told. We weren’t born having to shop this much, and we have, in our recent past, been just as happy (in many cases happier) consuming far less. The problem is the inflated role that consumption has come to play in our particular era.

Late capitalism teaches us to create ourselves through our consumer choices: shopping is how we form our identities, find community and express ourselves. Thus, telling people that they can’t shop as much as they want to because the planet’s support systems are overburdened can be understood as a kind of attack, akin to telling them that they cannot truly be themselves. This is likely why, of the original “Three Rs”—reduce, reuse, recycle—only the third has ever gotten any traction, since it allows us to keep on shopping as long as we put the refuse in the right box. The other two, which require that we consume less, were pretty much dead on arrival.

› Climate change is slow, and we are fast. When you are racing through a rural landscape on a bullet train, it looks as if everything you are passing is standing still: people, tractors, cars on country roads. They aren’t, of course. They are moving, but at a speed so slow compared with the train that they appear static.

So it is with climate change. Our culture, powered by fossil fuels, is that bullet train, hurtling forward toward the next quarterly report, the next election cycle, the next bit of diversion or piece of personal validation via our smartphones and tablets. Our changing climate is like the landscape out the window: from our racy vantage point, it can appear static, but it is moving, its slow progress measured in receding ice sheets, swelling waters and incremental temperature rises. If left unchecked, climate change will most certainly speed up enough to capture our fractured attention—island nations wiped off the map, and city-drowning superstorms, tend to do that. But by then, it may be too late for our actions to make a difference, because the era of tipping points will likely have begun.

› Climate change is place-based, and we are everywhere at once. The problem is not just that we are moving too quickly. It is also that the terrain on which the changes are taking place is intensely local: an early blooming of a particular flower, an unusually thin layer of ice on a lake, the late arrival of a migratory bird. Noticing those kinds of subtle changes requires an intimate connection to a specific ecosystem. That kind of communion happens only when we know a place deeply, not just as scenery but also as sustenance, and when local knowledge is passed on with a sense of sacred trust from one generation to the next.

But that is increasingly rare in the urbanized, industrialized world. We tend to abandon our homes lightly—for a new job, a new school, a new love. And as we do so, we are severed from whatever knowledge of place we managed to accumulate at the previous stop, as well as from the knowledge amassed by our ancestors (who, at least in my case, migrated repeatedly themselves).

Even for those of us who manage to stay put, our daily existence can be disconnected from the physical places where we live. Shielded from the elements as we are in our climate-controlled homes, workplaces and cars, the changes unfolding in the natural world easily pass us by. We might have no idea that a historic drought is destroying the crops on the farms that surround our urban homes, since the supermarkets still display miniature mountains of imported produce, with more coming in by truck all day. It takes something huge—like a hurricane that passes all previous high-water marks, or a flood destroying thousands of homes—for us to notice that something is truly amiss. And even then we have trouble holding on to that knowledge for long, since we are quickly ushered along to the next crisis before these truths have a chance to sink in.

Climate change, meanwhile, is busily adding to the ranks of the rootless every day, as natural disasters, failed crops, starving livestock and climate-fueled ethnic conflicts force yet more people to leave their ancestral homes. And with every human migration, more crucial connections to specific places are lost, leaving yet fewer people to listen closely to the land.

› Climate pollutants are invisible, and we have stopped believing in what we cannot see. When BP’s Macondo well ruptured in 2010, releasing torrents of oil into the Gulf of Mexico, one of the things we heard from company CEO Tony Hayward was that “the Gulf of Mexico is a very big ocean. The amount of volume of oil and dispersant we are putting into it is tiny in relation to the total water volume.” The statement was widely ridiculed at the time, and rightly so, but Hayward was merely voicing one of our culture’s most cherished beliefs: that what we can’t see won’t hurt us and, indeed, barely exists.

So much of our economy relies on the assumption that there is always an “away” into which we can throw our waste. There’s the away where our garbage goes when it is taken from the curb, and the away where our waste goes when it is flushed down the drain. There’s the away where the minerals and metals that make up our goods are extracted, and the away where those raw materials are turned into finished products. But the lesson of the BP spill, in the words of ecological theorist Timothy Morton, is that ours is “a world in which there is no ‘away.’”

When I published No Logo a decade and a half ago, readers were shocked to learn of the abusive conditions under which their clothing and gadgets were manufactured. But we have since learned to live with it—not to condone it, exactly, but to be in a state of constant forgetfulness. Ours is an economy of ghosts, of deliberate blindness.

Air is the ultimate unseen, and the greenhouse gases that warm it are our most elusive ghosts. Philosopher David Abram points out that for most of human history, it was precisely this unseen quality that gave the air its power and commanded our respect. “Called Sila, the wind-mind of the world, by the Inuit; Nilch’i, or Holy Wind, by the Navajo; Ruach, or rushing-spirit, by the ancient Hebrews,” the atmosphere was “the most mysterious and sacred dimension of life.” But in our time, “we rarely acknowledge the atmosphere as it swirls between two persons.” Having forgotten the air, Abram writes, we have made it our sewer, “the perfect dump site for the unwanted by-products of our industries…. Even the most opaque, acrid smoke billowing out of the pipes will dissipate and disperse, always and ultimately dissolving into the invisible. It’s gone. Out of sight, out of mind.”

* * *

Another part of what makes climate change so very difficult for us to grasp is that ours is a culture of the perpetual present, one that deliberately severs itself from the past that created us as well as the future we are shaping with our actions. Climate change is about how what we did generations in the past will inescapably affect not just the present, but generations in the future. These time frames are a language that has become foreign to most of us.

This is not about passing individual judgment, nor about berating ourselves for our shallowness or rootlessness. Rather, it is about recognizing that we are products of an industrial project, one intimately, historically linked to fossil fuels.

And just as we have changed before, we can change again. After listening to the great farmer-poet Wendell Berry deliver a lecture on how we each have a duty to love our “homeplace” more than any other, I asked him if he had any advice for rootless people like me and my friends, who live in our computers and always seem to be shopping for a home. “Stop somewhere,” he replied. “And begin the thousand-year-long process of knowing that place.”

That’s good advice on lots of levels. Because in order to win this fight of our lives, we all need a place to stand.

Read more of The Nation’s special #MyClimateToo coverage:

Mark Hertsgaard: Why Today Is All About Climate
Christopher Hayes: The New Abolitionism
Dani McClain: The ‘Environmentalists’ Who Scapegoat Immigrants and Women on Climate Change
Mychal Denzel Smith: Racial and Environmental Justice Are Two Sides of the Same Coin
Katrina vanden Heuvel: Earth Day’s Founding Father
Wen Stephenson: Let This Earth Day Be The Last
Katha Pollitt: Climate Change is the Tragedy of the Global Commons
Michelle Goldberg: Fighting Despair to Fight Climate Change
George Zornick: We’re the Fossil Fuel Industry’s Cheap Date
Dan Zegart: Want to Stop Climate Change? Take the Fossil Fuel Industry to Court
Jeremy Brecher: ‘Jobs vs. the Environment’: How to Counter the Divisive Big Lie
Jon Wiener: Elizabeth Kolbert on Species Extinction and Climate Change
Dave Zirin: Brazil’s World Cup Will Kick the Environment in the Teeth
Steven Hsieh: People of Color Are Already Getting Hit the Hardest by Climate Change
John Nichols: If Rick Weiland Can Say “No” to Keystone, So Can Barack Obama
Michelle Chen: Where Have All the Green Jobs Gone?
Peter Rothberg: Why I’m Not Totally Bummed Out This Earth Day
Leslie Savan: This Is My Brain on Paper Towels

‘Dressed’ laser aimed at clouds may be key to inducing rain, lightning (Science Daily)

Date: April 18, 2014

Source: University of Central Florida

Summary: The adage “Everyone complains about the weather but nobody does anything about it” may one day be obsolete if researchers further develop a new technique to aim a high-energy laser beam into clouds to make it rain or trigger lightning. The technique could also be applied in long-distance sensors and spectrometers to identify chemical makeup.

Credit: © Maksim Shebeko / Fotolia

The adage “Everyone complains about the weather but nobody does anything about it” may one day be obsolete if researchers at the University of Central Florida’s College of Optics & Photonics and the University of Arizona further develop a new technique to aim a high-energy laser beam into clouds to make it rain or trigger lightning.

The solution? Surround the beam with a second beam to act as an energy reservoir, sustaining the central beam to greater distances than previously possible. The secondary “dress” beam refuels and helps prevent the dissipation of the high-intensity primary beam, which on its own would break down quickly. A report on the project, “Externally refueled optical filaments,” was recently published in Nature Photonics.

Water condensation and lightning activity in clouds are linked to large amounts of static charged particles. Stimulating those particles with the right kind of laser holds the key to possibly one day summoning a shower when and where it is needed.

Lasers can already travel great distances but “when a laser beam becomes intense enough, it behaves differently than usual — it collapses inward on itself,” said Matthew Mills, a graduate student in the Center for Research and Education in Optics and Lasers (CREOL). “The collapse becomes so intense that electrons in the air’s oxygen and nitrogen are ripped off creating plasma — basically a soup of electrons.”

At that point, the plasma immediately tries to spread the beam back out, causing a struggle between the spreading and collapsing of an ultra-short laser pulse. This struggle is called filamentation, and creates a filament or “light string” that only propagates for a while until the properties of air make the beam disperse.
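The collapse described above sets in when the pulse's peak power exceeds the critical power for Kerr self-focusing, commonly estimated as P_cr = 3.77 λ² / (8π n₀ n₂). Plugging in typical literature values for air at 800 nm (the n₂ figure is an order-of-magnitude value from the filamentation literature, not from this paper) gives a few gigawatts:

```python
import math

# Critical power for Kerr self-focusing of a Gaussian beam:
#   P_cr = 3.77 * lambda^2 / (8 * pi * n0 * n2)
wavelength = 800e-9  # m, typical Ti:sapphire filamentation laser
n0 = 1.0003          # linear refractive index of air
n2 = 3e-23           # m^2/W, nonlinear index of air (order-of-magnitude value)

p_cr = 3.77 * wavelength**2 / (8 * math.pi * n0 * n2)
print(f"P_cr = {p_cr / 1e9:.1f} GW")  # a few gigawatts
```

This is why filamentation requires ultra-short pulses: only by compressing modest energies into femtoseconds do peak powers reach the gigawatt threshold at which the beam "collapses inward on itself."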

“Because a filament creates excited electrons in its wake as it moves, it artificially seeds the conditions necessary for rain and lightning to occur,” Mills said. Other researchers have caused “electrical events” in clouds, but not lightning strikes.

But how do you get close enough to direct the beam into the cloud without being blasted to smithereens by lightning?

“What would be nice is to have a sneaky way which allows us to produce an arbitrary long ‘filament extension cable.’ It turns out that if you wrap a large, low intensity, doughnut-like ‘dress’ beam around the filament and slowly move it inward, you can provide this arbitrary extension,” Mills said. “Since we have control over the length of a filament with our method, one could seed the conditions needed for a rainstorm from afar. Ultimately, you could artificially control the rain and lightning over a large expanse with such ideas.”

So far, Mills and fellow graduate student Ali Miri have been able to extend the pulse from 10 inches to about 7 feet. And they’re working to extend the filament even farther.

“This work could ultimately lead to ultra-long optically induced filaments or plasma channels that are otherwise impossible to establish under normal conditions,” said professor Demetrios Christodoulides, who is working with the graduate students on the project.

“In principle such dressed filaments could propagate for more than 50 meters or so, thus enabling a number of applications. This family of optical filaments may one day be used to selectively guide microwave signals along very long plasma channels, perhaps for hundreds of meters.”

The technique could also be applied in long-distance sensors and spectrometers to identify chemical makeup. Development of the technology was supported by a $7.5 million grant from the Department of Defense.

Journal Reference:

  1. Maik Scheller, Matthew S. Mills, Mohammad-Ali Miri, Weibo Cheng, Jerome V. Moloney, Miroslav Kolesik, Pavel Polynkin, Demetrios N. Christodoulides. Externally refuelled optical filaments. Nature Photonics, 2014; 8 (4): 297. DOI: 10.1038/nphoton.2014.47

Repercussões do novo relatório do Painel Intergovernamental sobre Mudanças Climáticas (IPCC)

Brasil já se prepara para adaptações às mudanças climáticas, diz especialista (Agência Brasil)

JC e-mail 4925, de 02 de abril de 2014

Com base no relatório do IPCC,dirigente do INPE disse que o Brasil já revela um passo adiante em termos de adaptação às mudanças climáticas

Titled Climate Change 2014: Impacts, Adaptation, and Vulnerability, the report released yesterday (March 31) by the Intergovernmental Panel on Climate Change (IPCC) signals that the effects of climate change are already being felt across the world. The report indicates that holding warming to just 2 degrees Celsius (roughly the most that can be tolerated if impacts are not to become severe) would require zero greenhouse gas emissions from 2050 onward.

"The commitment is to reach zero emissions from 2040-2050, and that means changing the entire development system, including a change of fuels," said José Marengo, head of the Earth System Science Center at Brazil's National Institute for Space Research (Inpe) and one of the authors of the new IPCC report, today (April 1). Marengo presented the report at the Brazilian Academy of Sciences (ABC) in Rio de Janeiro and noted that some countries read this as an attempt to curb economic growth. In fact, he said, the intention is to settle on a figure that keeps warming from becoming too intense and severe.

Drawing on the IPCC report, Marengo commented that Brazil is already a step ahead on adaptation to climate change. "I think Brazil has heard the message. It is already starting to prepare a national adaptation plan, through the ministries of the Environment and of Science, Technology, and Innovation." That adaptation, he added, goes hand in hand with vulnerability assessments, "and Brazil is vulnerable to climate change," he recalled.

Adaptation, he said, will follow government policy, but the scientific community will help draft the plan by identifying key regions and sectors. "Adaptation varies by region and by sector. Adaptation in the health sector in the Northeast can be totally different from the South. So this is a policy the government is already starting to map out seriously."

The plan calls for risk analyses in sectors such as agriculture, health, water resources, coastal regions, and large cities. It is beginning to take shape as a government strategy. Because vulnerabilities differ, the plan cannot impose a single policy on the whole country. On food security in particular, José Marengo stressed the importance of indigenous knowledge, especially for the poorest countries.

Marengo cautioned, however, that the plan will not be finished in the short term. "It takes time. This kind of study cannot be done in one or two years. It is a long-term effort, because conditions keep changing. In other words, it is a dynamic plan that has to be reassessed and redone every five years. Few countries have done this, and Brazil is starting to draw up its plan now," he said.

Marengo acknowledged that adaptation to climate change must also have an economic dimension, through regulation. "When I talk about adaptation, it is a mix of scientific knowledge to identify which areas are vulnerable. But all of this comes with things that are not climatic but economic, such as costs and investment. Because adaptation costs money. Who will pay for adaptation?" he asked.

The IPCC takes no position on the matter, though Marengo noted that poor countries want the rich ones to pay for their adaptation to climate change. The issue is expected to be addressed at the next United Nations climate conference, COP-20 of the UN Framework Convention on Climate Change, to be held in Lima, Peru, at the end of this year.

Still, the IPCC describes what is happening in different parts of the world and what could be done. The solutions, he noted, will be laid out in the next IPCC report, expected this month. That report, he said, will argue that "the solution lies in mitigation": reducing greenhouse gas emissions, using fewer fossil fuels and more renewable energy sources, new fuel options, new technological solutions, population stabilization. "All of these are things that can be considered." He admitted, though, that they are hard to achieve, because some countries are willing and others are not. "It depends on a global agreement."

According to the IPCC report, the trends are rising global temperature, increases and decreases in precipitation (rainfall), environmental degradation, risks to coastal areas and marine fauna, and changes in agricultural productivity, among others. Adaptation to these changes depends on place and context; what works for one sector may not apply to another. Measures to adapt to climate change should be taken by governments, but also by society as a whole and by individuals, the scientists who wrote the report recommend.

For the Brazilian Northeast, for example, building cisterns can be a start toward adapting to drought, but it has to be a permanent pursuit, José Marengo stressed. He observed that reforestation programs are a form of mitigation and, in turn, of adaptation, insofar as they reduce emissions and absorb excess emissions.

In Brazil, three concerns stand out: water security, energy security, and food security. The droughts in the Northeast and the recent floods in the North have helped make the problem of climate vulnerability clear, the scientist added. He said that Brazil has, in a way, reacted to cope with the extremes. "But we have to consider that these extremes may become more frequent. Experience is showing that some of these extremes must be thought about over the long term, over decades," he noted.

Biologist Marcos Buckeridge, a researcher at the Biosciences Institute of the University of São Paulo (USP) and an IPCC member, noted that fires in the Amazon, although declining in recent years, still occur with intensity. "Brazil is the country that burns the most forest in the world," and this drives the loss of many animal and plant species, with resulting impacts on the climate.

For Carolina Burle Schmidt Dubeux, senior researcher at the Center for Integrated Studies on Environment and Climate Change (Centro Clima) of the Federal University of Rio de Janeiro, the economics of adaptation must also manage the demand side. That means encompassing not only investment but also economic regulation in which prices reflect the reduced supply of goods. "Economic regulation is very important for us to be able to adapt [to climate change]. Policies have to reflect the scarcity of water and electricity and control demand," she said.

According to the researcher, internalizing environmental costs in prices is necessary for the population to have a better quality of life. "Adaptation is the constant management of climate-change risk, which is unknown and unpredictable," she added. Carolina argued that for adaptation to happen there must be constant communication between government and society. "The media has a relevant role in this process," she said.

(Agência Brasil)

* * *

Climate change threatens staples of the Brazilian food basket (O Globo)

JC e-mail 4925, April 2, 2014

Diets will suffer from falling harvests and declining fisheries

The impacts of climate change in Brazil will cut yields of wheat, rice, corn, and soybeans, staples of the Brazilian food basket. Another problem is arriving on the coast: according to projections released this week by the Intergovernmental Panel on Climate Change (IPCC), large fish populations will leave the tropical zone in the coming decades for higher latitudes, hurting artisanal fishing as well.

Food insecurity will also strike other countries. European Union agricultural output is expected to fall significantly by the end of the century. Two solutions are already being studied. One is to increase imports; Brazil would be an important supplier if it can feed its own population and still produce a surplus. The other is research into genetic varieties that make crops resistant to the new climate conditions.

"Extreme events, even short-lived ones, reduce the size of the harvest," said Marcos Buckeridge, professor in the Botany Department at USP and co-author of the IPCC report, in a presentation yesterday at the Brazilian Academy of Sciences. "Moreover, we are the country that burns the most forest in the world, and the drought is worst precisely in the eastern Amazon, leading to agricultural losses in the region."

Global warming will also weaken the country's water security.

"We need to find a way to guarantee water availability in the semi-arid region, as well as infrastructure to direct it to urban areas," recommends José Marengo, climatologist at the National Institute for Space Research (Inpe) and also an author of the report.

Marengo points out that the Northeast has faced drought for three years. Water trucks, he says, are only a stopgap, so other measures must be considered. Even the diversion of the São Francisco River may not be enough, since the region is expected to undergo desertification by the end of the century.

According to a 2009 study by several Brazilian institutions, cited in the new IPCC report, rainfall in the Northeast could decline by up to 2.5 mm per day by 2100, causing agricultural losses in every state in the region. The water deficit would cut grazing capacity for beef cattle by 25%. The setback to livestock is another blow to the Brazilian diet.

"Brazil will lose between R$ 719 billion and R$ 3.6 trillion by 2050 if it does nothing. We will face agricultural losses and will need more resources for the hydropower sector," warns Carolina Dubeux, researcher at Centro Clima at Coppe/UFRJ, who signed the document. "Adaptation is constant risk management."

(Renato Grandelle / O Globo)

* * *

The gravest climate impacts in Brazil will come from droughts and floods (Folha de S.Paulo)

JC e-mail 4925, April 2, 2014

Brazilians on the UN panel say the country must prepare for opposite problems in different regions

The regional projections in the new report from the IPCC (the UN climate panel) point to problems with water availability as the main effects of climate change in Brazil, with persistent droughts in some places and record floods in others. Released the day before yesterday in Japan, the document from IPCC Working Group 2 emphasizes the impacts and vulnerabilities the climate creates around the world. Besides listing the main risks, it stresses the need to adapt to the projected risks. In Brazil, given the country's size, the effects will differ from region to region.

Beyond affecting the forest and its ecosystems, climate change is also expected to harm power generation, agriculture, and even public health. "Everything comes back to water. Wherever we have problems with water, we will have problems with other things," summarized Marcos Buckeridge, professor at USP and one of the IPCC report's authors, in a press conference with other Brazilians who took part in the panel.

In the Amazon, rainfall patterns are already being affected. The flood on the Madeira River currently exceeds 25 m, the highest level on record, and affects 60,000 people. In the Northeast, which has seen successive droughts in recent years, climate change may intensify the dry spells, and there is a risk that the semi-arid region becomes permanently arid.

According to José Marengo of Inpe (the National Institute for Space Research), one of the document's lead authors, it is too early to know whether São Paulo's persistent drought will repeat next year or in the years after, but he warned that Brazil needs to prepare better.

The IPCC made projections for different scenarios, but the bottom line is that the consequences grow more severe as greenhouse gas emissions rise. "If we cannot reduce the threats, we at least need to reduce the risks," said Marengo, noting that in Brazil this does not always happen. For droughts, building cisterns and deploying water trucks would be adaptation options; where rainfall is expected to increase, the alternative would be moving people out of risk areas such as hillsides.

Carolina Dubeux of UFRJ, who also takes part in the IPCC, says that balancing supply and demand requires the economy to reflect the scarcity of natural resources, above all in areas such as agriculture and power generation. "Prices need to reflect the scarcity of a good. If water is scarce, its price needs to reflect that. We cannot just expand supply," she said.

In this report, the confidence level of projections for some regions fell, especially in developing countries. According to Carlos Nobre, secretary at the Ministry of Science, Technology, and Innovation, that does not mean the document carries less political or scientific weight.

Everton Lucero, head of climate affairs at Itamaraty (Brazil's foreign ministry), says the document will be important input for discussions of the next global climate agreement. "But there is an imbalance in the scientific work the IPCC takes into account, with far more emphasis on what is produced in rich countries. Developing nations also produce a great deal of high-quality science, which deserves more space," he said.

(Giuliana Miranda/Folha de S.Paulo)

* * *

IPCC report points to risks and to opportunities for response (Ascom do MCTI)

JC e-mail 4925, April 2, 2014

A total of 309 scientists from 70 countries, among coordinators, authors, editors, and reviewers, were selected to produce the report

The new report from the Intergovernmental Panel on Climate Change (IPCC) says the effects of climate change are already occurring on every continent and in every ocean, and that the world is, in many cases, ill prepared for the risks. The document also concludes that there are opportunities to respond, although the risks become difficult to manage at high levels of warming.

The report, titled Climate Change 2014: Impacts, Adaptation, and Vulnerability, was produced by IPCC Working Group 2 (WG 2) and details the impacts of climate change to date, the future risks, and the opportunities for effective action to reduce those risks. The findings were presented to the Brazilian press at a news conference in Rio de Janeiro on Tuesday (April 1).

A total of 309 scientists from 70 countries, among coordinators, authors, editors, and reviewers, were selected to produce the report, with the help of 436 contributing authors and 1,729 expert reviewers.

The authors conclude that responding to climate change involves making choices about risk in a changing world, noting that the nature of climate-change risk is increasingly clear, though climate change will also continue to produce surprises. The report identifies vulnerable populations, industries, and ecosystems around the world.

According to the document, climate-change risk arises where vulnerability (lack of preparedness) and exposure (people or assets in harm's way) overlap with hazard (triggering climate events or trends). Each of these three components can be targeted by smart actions to reduce risk.
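
The three components above can be read as levers that each reduce overall risk. As a toy illustration only (this is not a formula from the IPCC report, and all names are invented), one can model risk as the product of the three scores, so that shrinking any single component shrinks the total:

```python
# Toy model: risk as the product of hazard, exposure, and vulnerability.
# Each input is a relative score in [0, 1]; the output is a relative risk score.

def climate_risk(hazard, exposure, vulnerability):
    """Return a relative risk score; reducing any component reduces risk."""
    return hazard * exposure * vulnerability

# A hypothetical coastal city before and after an adaptation program that
# halves vulnerability (better preparedness), with hazard and exposure unchanged.
before = climate_risk(hazard=0.6, exposure=0.9, vulnerability=0.8)
after = climate_risk(hazard=0.6, exposure=0.9, vulnerability=0.4)

print(round(before, 3))  # 0.432
print(round(after, 3))   # 0.216
```

The multiplicative form simply encodes the report's qualitative point: risk requires all three ingredients at once, so action on any one of them pays off.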

"We live in an era of man-made climate change," says WG 2 co-chair Vicente Barros of the University of Buenos Aires, Argentina. "In many cases, we are not prepared for the climate-related risks we already face. Investments in better preparedness can improve outcomes, both for the present and for the future."

Adaptation to reduce climate risk is beginning to occur, but with a stronger focus on reacting to past events than on preparing for a different future, according to the working group's other co-chair, Chris Field of the Carnegie Institution for Science in the United States.

"Climate-change adaptation is not an exotic agenda that has never been tried. Governments, firms, and communities around the world are building experience with adaptation," Field explains. "This experience forms a starting point for bolder, more ambitious adaptations, which will be important as climate and society continue to change."

Future risks from a changing climate depend strongly on the amount of future climate change. Rising magnitudes of warming increase the likelihood of severe and pervasive impacts that may be surprising or irreversible.

"With high levels of warming that result from continued growth in greenhouse gas emissions, risks will be challenging to manage, and even serious, sustained investments in adaptation will face limits," says Field.

Observed impacts of climate change have already affected agriculture, human health, ecosystems on land and in the oceans, water supplies, and some people's livelihoods. The striking feature of observed impacts is that they are occurring from the tropics to the poles, from small islands to large continents, and from the wealthiest countries to the poorest.

"The report concludes that people, societies, and ecosystems are vulnerable around the world, but with different vulnerability in different places. Climate change often interacts with other stresses to increase risk," says Chris Field.

Adaptation can play a key role in reducing these risks, Vicente Barros notes. "Part of the reason adaptation is so important is that, because of climate change, the world faces a host of risks already baked into the climate system, accentuated by past emissions and existing infrastructure."

Field adds: "Understanding that climate change is a challenge in risk management opens a wide range of opportunities for integrating adaptation with economic and social development and with initiatives to limit future warming. We definitely face challenges, but understanding those challenges and tackling them creatively can make climate-change adaptation an important way to help build a more vibrant world in the near term and beyond."

The WG 2 report comes in two volumes. The first contains a Summary for Policymakers, a Technical Summary, and 20 chapters assessing risks by sector and opportunities for response. The sectors include freshwater resources, terrestrial and ocean ecosystems, coasts, food, urban and rural areas, energy and industry, human health and security, and livelihoods and poverty.

In its ten chapters, the second volume assesses risks and opportunities for response by region. The regions include Africa, Europe, Asia, Australasia (Australia, New Zealand, New Guinea, and some smaller islands of eastern Indonesia), North America, Central and South America, the polar regions, small islands, and the oceans.

Access the working group's contribution (in English) here or on the institution's website.

The WG 2 Technical Support Unit is hosted by the Carnegie Institution for Science and funded by the United States government.

"The Working Group 2 report is another important step forward in our understanding of how to reduce and manage the risks of climate change," says IPCC chair Rajendra Pachauri. "Along with the reports of Working Groups 1 and 3, it provides a conceptual map not only of the essential features of the climate challenge, but of the options for solving it."

The WG 1 report was released in September 2013, and the WG 3 report is due out this month. The Fifth Assessment Report (AR5) will conclude with the publication of a synthesis in October.

The Intergovernmental Panel on Climate Change is the international body for assessing the science related to climate change. It was created in 1988 by the World Meteorological Organization and the United Nations Environment Programme (UNEP) to provide policymakers with regular assessments of the scientific basis of climate change, its impacts and future risks, and options for adaptation and mitigation.

It was at the IPCC's 28th Session, held in April 2008, that the panel's members decided to prepare AR5. The document involved 837 authors and review editors.

(Ascom do MCTI, with information from the IPCC)

'Walk Again' project exoskeleton works in test documented on Facebook (O Globo)

JC e-mail 4924, April 1, 2014

Volunteers are already training in a São Paulo laboratory to use the robotic suit

A paraplegic rises from a wheelchair, walks, and delivers the opening kick of the 2014 World Cup wearing a mind-controlled robotic exoskeleton. The complex robotic suit, built from light alloys and powered by a hydraulic system, was constructed by Gordon Cheng of the Technical University of Munich and is designed to work the muscles of the paralyzed legs.

The exoskeleton is the product of years of work by an international team of scientists and engineers in the "Walk Again" project, led by Brazilian Miguel Nicolelis, who this Tuesday begins documenting the project on his Facebook page in the run-up to the World Cup. In a São Paulo laboratory, Nicolelis is already training nine paraplegic men and women, aged 20 to 40, to use the exoskeleton. Three of them will be chosen to take part in the opening match between Brazil and Croatia.

Last month, the research team attended soccer matches in São Paulo to check whether mobile-phone radiation from the crowds could interfere with the process. Electromagnetic waves could make the exoskeleton misbehave, but the tests were encouraging: the chances of malfunction are small.

The volunteer wearing the exoskeleton will also wear a cap fitted with electrodes to pick up brain waves. These signals will be transmitted to a computer in a backpack, where they will be decoded and used to drive hydraulic actuators in the suit. The exoskeleton is powered by a battery that allows two hours of continuous use.

Under the operator's feet will be plates with sensors that detect contact with the ground. With each step, a signal fires a vibrating device sewn into the forearm of the user's shirt. The device appears to trick the brain into thinking the sensation comes from the foot. In virtual-reality simulations, patients felt that their legs were moving and touching something. In other tests, patients used the system to walk on a treadmill.
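
The pipeline described above (EEG cap to backpack decoder to hydraulic actuators, with foot-contact sensors triggering vibrotactile feedback on the forearm) can be sketched as a single control cycle. This is a minimal illustration only; every name is invented, and the Walk Again project's real decoder is vastly more sophisticated than this threshold rule:

```python
# Minimal sketch of one control cycle: decode an EEG window into a movement
# intent, and on ground contact fire the forearm vibrator so the brain
# attributes the sensation to the foot (sensory substitution).

def decode_intent(eeg_window):
    """Toy decoder: mean signal power above a threshold is read as 'step'."""
    power = sum(x * x for x in eeg_window) / len(eeg_window)
    return "step" if power > 0.5 else "stand"

def control_cycle(eeg_window, foot_contact):
    """Return the actuator command and the tactile-feedback state."""
    command = decode_intent(eeg_window)
    feedback = "vibrate" if foot_contact else "idle"
    return command, feedback

print(control_cycle([0.9, 1.1, 1.0], foot_contact=True))   # ('step', 'vibrate')
print(control_cycle([0.1, 0.2, 0.1], foot_contact=False))  # ('stand', 'idle')
```

The key design point the article describes is the closed loop: the decoded command drives the legs, and the contact sensors close the loop back to the user's body, which is what made the virtual-reality patients feel their legs moving.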

IPCC report suggests ecosystem-based adaptation (Estado de S.Paulo)

JC e-mail 4923, March 31, 2014

Model adopted in Brazil and the region was cited as an alternative to costly infrastructure

Alongside the usual recommendations that countries invest more in infrastructure to increase their resilience to climate change, the new report from the Intergovernmental Panel on Climate Change (IPCC), released on Sunday, March 30, gives room to a cheaper alternative that in some places can achieve similar results: ecosystem-based adaptation.

The topic appears in greater or lesser depth in about half the chapters and received special attention in the regional chapter on Central and South America, where techniques such as creating protected areas, conservation agreements, and community management of natural areas are being tested.

But what does this have to do with adaptation? According to ecologist Fabio Scarano of Conservation International, one of the chapter's authors, the idea is to strengthen essential ecosystem services. A well-preserved environment can provide a stable climate, a supply of water, the presence of pollinators. "It is like infrastructure provided by nature itself," he says.

The premise is nature conservation combined with incentives for its sustainable use, partly to avert poverty, which is one of the main drivers of populations' vulnerability.

"Normally, when people talk about adaptation, they think of building large structures, such as a dike to prevent flooding, which is usually very expensive. In ecosystem-based adaptation, conserving nature and using it well is a way to reduce people's vulnerability to climate change," he says.

He cites the example of a coastal region whose mangroves have been degraded. "That ecosystem works as a barrier. In a scenario of stronger storm surges and sea-level rise, the coast becomes more vulnerable and dikes have to be built. But if the mangrove is kept standing, and people are helped to draw a basic livelihood from it with more sustainable techniques, and are paid to keep it that way, it will be cheaper than having to build a dike later."

According to the researcher, being more resilient means ending poverty and preserving nature. "If we can have both, we achieve the much-discussed sustainable development," he says.

(Giovana Girardi / Estado de S.Paulo)

Other articles on the subject:

O Globo
UN panel presents measures against global warming

Valor Econômico
Climate change affects everyone and is happening now, IPCC warns

World Cup robot exoskeleton will use a technique its creator once criticized (Folha de S.Paulo)

JC e-mail 4923, March 31, 2014

Scientist Miguel Nicolelis changes method for having a paraplegic child take the competition's opening kick

At the opening of the World Cup in Brazil, a child with a spinal cord injury wearing an exoskeleton will take the competition's opening kick. The public demonstration will be the first result of the "Walk Again" project, led by Brazilian neuroscientist Miguel Nicolelis. But a recent change in how the brain signals that will control the exoskeleton are captured raises doubts about the project's advances in neuroscience.

Throughout his career, Nicolelis has been an uncompromising advocate of implanting electrodes inside the brain to record the simultaneous activity of individual neurons. He was a critic of non-invasive methods such as electroencephalography (EEG), a technique developed early in the last century that uses electrodes placed on the scalp to obtain such recordings.

As recently as May of last year, Nicolelis was still making public statements about developing electrodes to be implanted. But from October 2013 on, he began saying he would use signals obtained by EEG. His criticisms of that technique appear in his book and in articles, and have even produced public clashes.

In an article published in 2008 in Scientific American Brasil, co-authored with John Chapin, Nicolelis writes: "EEG signals, however, cannot be used directly in limb prostheses, because they show the average electrical activity of large populations of neurons. It is difficult to extract from these signals the tiny variations needed to encode precise movements of the arms or hands."

In a 2013 debate at the American Association for the Advancement of Science, the Brazilian directed barbs at Todd Coleman of the University of California, San Diego, who researches EEG for controlling prostheses. On that occasion, Nicolelis said that "there will be applications for invasive implants because they are much better than surface devices."

According to Márcio Dutra Moraes, a neuroscientist at UFMG, the change of methodology is a "conceptual" shift in how to approach the question. He points out that it happened not because EEG is better, but because the original proposal was "stupidly more complex" and EEG greatly simplifies things, even though it brings no substantial advance. According to Moraes, the change "certainly came from the impossibility of resolving the initial project satisfactorily and ethically within the time limit imposed by the World Cup."

According to one scientist with international experience who asked not to be identified, the current project, as it will be presented at the World Cup, would not justify the R$ 33 million invested by the government.

Edward Tehovnik, a researcher at the Brain Institute of UFRN, once worked with Nicolelis but broke with the scientist, who fired him. He questions how much of the June demonstration will be controlled by the exoskeleton and how much by the child's brain.

"My analysis, based on the published data, suggests that less than 1% of the signal will come from the child's brain. The other 99% will come from the robot." And he asks: "Will it really be the paralyzed child who kicks the ball?"

Sergio Neuenschwander, full professor at UFRN, says the choice of EEG is a very profound change to the original project. He says it is possible to use EEG signals to issue commands to the robot, but that is different from capturing what would be the neural code for walking, sitting, kicking, and so on.

"The fact that he opted to change techniques shows the size of the challenge ahead."

(Fernando Tadeu Moraes/Folha de S.Paulo)