Tag archive: Tecnofetichismo

How big science failed to unlock the mysteries of the human brain (MIT Technology Review)

technologyreview.com

Large, expensive efforts to map the brain started a decade ago but have largely fallen short. It’s a good reminder of just how complex this organ is.

Emily Mullin

August 25, 2021


In September 2011, a group of neuroscientists and nanoscientists gathered at a picturesque estate in the English countryside for a symposium meant to bring their two fields together. 

At the meeting, Columbia University neurobiologist Rafael Yuste and Harvard geneticist George Church made a not-so-modest proposal: to map the activity of the entire human brain at the level of individual neurons and detail how those cells form circuits. That knowledge could be harnessed to treat brain disorders like Alzheimer’s, autism, schizophrenia, depression, and traumatic brain injury. And it would help answer one of the great questions of science: How does the brain bring about consciousness? 

Yuste, Church, and their colleagues drafted a proposal that would later be published in the journal Neuron. Their ambition was extreme: “a large-scale, international public effort, the Brain Activity Map Project, aimed at reconstructing the full record of neural activity across complete neural circuits.” Like the Human Genome Project a decade earlier, they wrote, the brain project would lead to “entirely new industries and commercial ventures.” 

New technologies would be needed to achieve that goal, and that’s where the nanoscientists came in. At the time, researchers could record activity from just a few hundred neurons at once—but with around 86 billion neurons in the human brain, it was akin to “watching a TV one pixel at a time,” Yuste recalled in 2017. The researchers proposed tools to measure “every spike from every neuron” in an attempt to understand how the firing of these neurons produced complex thoughts. 

The audacious proposal intrigued the Obama administration and laid the foundation for the multi-year Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative, announced in April 2013. President Obama called it the “next great American project.” 

But it wasn’t the first audacious brain venture. In fact, a few years earlier, Henry Markram, a neuroscientist at the École Polytechnique Fédérale de Lausanne in Switzerland, had set an even loftier goal: to make a computer simulation of a living human brain. Markram wanted to build a fully digital, three-dimensional model at the resolution of the individual cell, tracing all of those cells’ many connections. “We can do it within 10 years,” he boasted during a 2009 TED talk.

In January 2013, a few months before the American project was announced, the EU awarded Markram $1.3 billion to build his brain model. The US and EU projects sparked similar large-scale research efforts in countries including Japan, Australia, Canada, China, South Korea, and Israel. A new era of neuroscience had begun. 

An impossible dream?

A decade later, the US project is winding down, and the EU project faces its deadline to build a digital brain. So how did it go? Have we begun to unwrap the secrets of the human brain? Or have we spent a decade and billions of dollars chasing a vision that remains as elusive as ever? 

From the beginning, both projects had critics.

EU scientists worried about the costs of the Markram scheme and thought it would squeeze out other neuroscience research. And even at the original 2011 meeting in which Yuste and Church presented their ambitious vision, many of their colleagues argued it simply wasn’t possible to map the complex firings of billions of human neurons. Others said it was feasible but would cost too much money and generate more data than researchers would know what to do with. 

In a blistering article appearing in Scientific American in 2013, Partha Mitra, a neuroscientist at the Cold Spring Harbor Laboratory, warned against the “irrational exuberance” behind the Brain Activity Map and questioned whether its overall goal was meaningful. 

Even if it were possible to record all spikes from all neurons at once, he argued, a brain doesn’t exist in isolation: in order to properly connect the dots, you’d need to simultaneously record external stimuli that the brain is exposed to, as well as the behavior of the organism. And he reasoned that we need to understand the brain at a macroscopic level before trying to decode what the firings of individual neurons mean.  

Others had concerns about the impact of centralizing control over these fields. Cornelia Bargmann, a neuroscientist at Rockefeller University, worried that it would crowd out research spearheaded by individual investigators. (Bargmann was soon tapped to co-lead the BRAIN Initiative’s working group.)


While the US initiative sought input from scientists to guide its direction, the EU project was decidedly more top-down, with Markram at the helm. But as Noah Hutton documents in his 2020 film In Silico, Markram’s grand plans soon unraveled. As an undergraduate studying neuroscience, Hutton had been assigned to read Markram’s papers and was impressed by his proposal to simulate the human brain; when he started making documentary films, he decided to chronicle the effort. He soon realized, however, that the billion-dollar enterprise was characterized more by infighting and shifting goals than by breakthrough science.

In Silico shows Markram as a charismatic leader who needed to make bold claims about the future of neuroscience to attract the funding to carry out his particular vision. But the project was troubled from the outset by a major issue: there isn’t a single, agreed-upon theory of how the brain works, and not everyone in the field agreed that building a simulated brain was the best way to study it. It didn’t take long for those differences to arise in the EU project. 

In 2014, hundreds of experts across Europe penned a letter citing concerns about oversight, funding mechanisms, and transparency in the Human Brain Project. The scientists felt Markram’s aim was premature and too narrow and would exclude funding for researchers who sought other ways to study the brain. 

“What struck me was, if he was successful and turned it on and the simulated brain worked, what have you learned?” Terry Sejnowski, a computational neuroscientist at the Salk Institute who served on the advisory committee for the BRAIN Initiative, told me. “The simulation is just as complicated as the brain.” 

The Human Brain Project’s board of directors voted to change its organization and leadership in early 2015, replacing a three-member executive committee led by Markram with a 22-member governing board. Christoph Ebell, a Swiss entrepreneur with a background in science diplomacy, was appointed executive director. “When I took over, the project was at a crisis point,” he says. “People were openly wondering if the project was going to go forward.”

But a few years later he was out too, after a “strategic disagreement” with the project’s host institution. The project is now focused on providing a new computational research infrastructure to help neuroscientists store, process, and analyze large amounts of data—unsystematic data collection has been an issue for the field—and develop 3D brain atlases and software for creating simulations.

The US BRAIN Initiative, meanwhile, underwent its own changes. Early on, in 2014, responding to the concerns of scientists and acknowledging the limits of what was possible, it evolved into something more pragmatic, focusing on developing technologies to probe the brain. 

New day

Those changes have finally started to produce results—even if they weren’t the ones that the founders of each of the large brain projects had originally envisaged. 

Last year, the Human Brain Project released a 3D digital map that integrates different aspects of human brain organization at the millimeter and micrometer level. It’s essentially a Google Earth for the brain. 

And earlier this year Alipasha Vaziri, a neuroscientist funded by the BRAIN Initiative, and his team at Rockefeller University reported in a preprint paper that they’d simultaneously recorded the activity of more than a million neurons across the mouse cortex. It’s the largest recording of animal cortical activity yet made, if far from listening to all 86 billion neurons in the human brain as the original Brain Activity Map hoped.

The US effort has also shown some progress in its attempt to build new tools to study the brain. It has speeded the development of optogenetics, an approach that uses light to control neurons, and its funding has led to new high-density silicon electrodes capable of recording from hundreds of neurons simultaneously. And it has arguably accelerated the development of single-cell sequencing. In September, researchers using these advances will publish a detailed classification of cell types in the mouse and human motor cortexes—the biggest single output from the BRAIN Initiative to date.

While these are all important steps forward, though, they’re far from the initial grand ambitions. 

Lasting legacy

We are now heading into the last phase of these projects—the EU effort will conclude in 2023, while the US initiative is expected to have funding through 2026. What happens in these next years will determine just how much impact they’ll have on the field of neuroscience.

When I asked Ebell what he sees as the biggest accomplishment of the Human Brain Project, he didn’t name any one scientific achievement. Instead, he pointed to EBRAINS, a platform launched in April of this year to help neuroscientists work with neurological data, perform modeling, and simulate brain function. It offers researchers a wide range of data and connects many of the most advanced European lab facilities, supercomputing centers, clinics, and technology hubs in one system. 

“If you ask me ‘Are you happy with how it turned out?’ I would say yes,” Ebell said. “Has it led to the breakthroughs that some have expected in terms of gaining a completely new understanding of the brain? Perhaps not.” 

Katrin Amunts, a neuroscientist at the University of Düsseldorf, who has been the Human Brain Project’s scientific research director since 2016, says that while Markram’s dream of simulating the human brain hasn’t been realized yet, it is getting closer. “We will use the last three years to make such simulations happen,” she says. But it won’t be a big, single model—instead, several simulation approaches will be needed to understand the brain in all its complexity. 

Meanwhile, the BRAIN Initiative has provided more than 900 grants to researchers so far, totaling around $2 billion. The National Institutes of Health is projected to spend nearly $6 billion on the project by the time it concludes. 

For the final phase of the BRAIN Initiative, scientists will attempt to understand how brain circuits work by diagramming connected neurons. But claims for what can be achieved are far more restrained than in the project’s early days. The researchers now realize that understanding the brain will be an ongoing task—it’s not something that can be finalized by a project’s deadline, even if that project meets its specific goals.

“With a brand-new tool or a fabulous new microscope, you know when you’ve got it. If you’re talking about understanding how a piece of the brain works or how the brain actually does a task, it’s much more difficult to know what success is,” says Eve Marder, a neuroscientist at Brandeis University. “And success for one person would be just the beginning of the story for another person.” 

Yuste and his colleagues were right that new tools and techniques would be needed to study the brain in a more meaningful way. Now, scientists will have to figure out how to use them. But instead of answering the question of consciousness, developing these methods has, if anything, only opened up more questions about the brain—and shown just how complex it is. 

“I have to be honest,” says Yuste. “We had higher hopes.”

Emily Mullin is a freelance journalist based in Pittsburgh who focuses on biotechnology.

Dress-sneaker denialism (Folha de S.Paulo)

It is not with disinformation that journalism will contribute to the climate issue

Thiago Amparo – original article here.

Aug 11, 2021, 10:05 pm

The perversity of denialism lies in swearing that one is saying the opposite of what one is in fact saying. In this newspeak, denialism wears the dress sneakers of anti-alarmism. The argument Leandro Narloch made in this Folha on Tuesday (10) is tedious, because it is stale. Stale because, as Michael Mann recounts in “The New Climate War”, it is nothing more than the same denialist rhetoric 2.0.

In essence, Narloch argues that some climate-damaging activities should be “celebrated and spread” because they make us “less vulnerable to nature.” Narloch is scientifically wrong. And he subscribes to one of the most nefarious forms of denialism: he masks it, selling solutions that not only fail to mitigate the climate crisis or adapt societies to it, but have the opposite effect. Blow up the Amazon in order to save it: that is the argument.

These and other denialist discourses had already been mapped in Cambridge's journal Global Sustainability in July 2020: they are not new. Instead of challenging 21st-century taboos, they sell untruths as if they were science. Narloch gets the concept of vulnerability wrong: from the wildfires in California to the floods in Germany, we are not protected from nature, because we are embedded in it. He also ignores the IPCC's vast literature on vulnerability.

Narloch disregards the climate-science concept of feedback loops: the climate crisis pulls a series of triggers of incalculable dimension, a chain reaction never seen before. Destroying the climate will not protect us from the climate, because it is the absence of a drastic energy transition that has deepened the climate crisis. Investing in the opposite is inefficient.

If the IPCC report has switched on the red light, it is not with disinformation that journalism will contribute to the issue. Pluralism is a river in which ideas move within the banks of truth and science. Do not complain when the river runs dry, imploding the banks that journalism should have protected.

Bill Gates and the problem with climate solutionism (MIT Technology Review)

Nature and space

Focusing on technological solutions to climate change looks like an attempt to dodge the more challenging political obstacles.

By MIT Technology Review, April 6, 2021

In his new book, How to Avoid a Climate Disaster, Bill Gates takes a technological approach to understanding the climate crisis. Gates starts with the 51 billion tons of greenhouse gases created per year. He breaks that pollution down into sectors by their impact, moving from electricity, industry, and agriculture to transportation and buildings. Throughout, Gates proves adept at cutting through the complexities of the climate challenge, giving the reader useful heuristics for distinguishing the bigger technological problems (cement) from the smaller ones (aircraft).

Present at the 2015 Paris climate negotiations, Gates and dozens of wealthy individuals launched Breakthrough Energy, a venture capital fund intertwined with lobbying and research efforts. Gates and his fellow investors argued that both the federal government and the private sector are underinvesting in energy innovation. Breakthrough aims to fill that gap, investing in everything from next-generation nuclear technology to plant-based meat that tastes like beef. The fund's first round of US$1 billion had some early successes, such as Impossible Foods, a maker of plant-based burgers. The fund announced a second round of equal size in January.

A parallel effort, an international agreement called Mission Innovation, says it has persuaded its members (the European Union's executive branch along with 24 countries, including China, the US, India, and Brazil) to invest an additional US$4.6 billion per year since 2015 in clean energy research and development.

These various initiatives are the through line of Gates's latest book, written from a techno-optimist perspective. “Everything I've learned about climate and technology makes me optimistic... if we act fast enough, [we can] avoid a climate catastrophe,” he writes in the opening pages.

As many have pointed out, much of the necessary technology already exists; much can be done now. While Gates does not dispute this, his book focuses on the technological challenges he believes still need to be overcome to achieve deeper decarbonization. He spends less time on the political hurdles, writing that he thinks “more like an engineer than a political scientist.” Yet politics, in all its messiness, is the main impediment to progress on climate change. And engineers should understand how complex systems can have feedback loops that go wrong.

Yes, minister

Kim Stanley Robinson, by contrast, does think like a political scientist. His latest novel, The Ministry for the Future (not yet translated into Portuguese), opens just a few years from now, in 2025, when an immense heat wave strikes India, killing millions of people. The book's protagonist, Mary Murphy, runs a UN agency charged with representing the interests of future generations, in an attempt to unite the world's governments behind a climate solution. Throughout the book, intergenerational equity and various forms of distributive politics are in focus.

If you have seen the scenarios that the Intergovernmental Panel on Climate Change (IPCC) develops for the future, Robinson's book will feel familiar. His story probes the policies needed to solve the climate crisis, and he has certainly done his homework. Although it is an exercise in imagination, there are moments when the novel reads more like a graduate seminar in the social sciences than a work of escapist fiction. The climate refugees at the center of the story illustrate how the consequences of pollution hit the world's poorest hardest. Yet it is the rich who produce far more carbon.

Reading Gates after Robinson highlights the inextricable connection between inequality and climate change. Gates's efforts on the climate question are laudable. But when he tells us that the combined wealth of the people backing his venture fund is US$170 billion, we are left a little puzzled that they have dedicated only US$2 billion to climate solutions, less than 2% of their assets. That fact alone is an argument for taxing wealth: the climate crisis demands government action. It cannot be left to the whims of billionaires.

As billionaires go, Gates is arguably one of the good ones. He tells stories about how he uses his fortune to help the poor and the planet. The irony of writing a book about climate change while flying a private jet and owning a 6,132-square-meter mansion is not lost on the reader, nor on Gates, who calls himself an “imperfect messenger on climate change.” Even so, he is unquestionably an ally of the climate movement.

But by focusing on technological innovation, Gates plays down the role of fossil fuel interests in obstructing that progress. Curiously, climate denial goes unmentioned in the book. Washing his hands of political polarization, Gates never draws a connection to his fellow billionaires Charles and David Koch, who made their fortunes in petrochemicals and have played a prominent role in propagating climate denialism.

For example, Gates marvels that, for the vast majority of Americans, electric heaters are actually cheaper than continuing to burn fossil fuels. To him, it is a puzzle that people do not adopt these cheaper, more sustainable options. But it is not. As the journalists Rebecca Leber and Sammy Roth have reported in Mother Jones and the Los Angeles Times, the gas industry is funding advocates and creating marketing campaigns to oppose electrification and keep people locked into fossil fuels.

These opposing forces are more visible in Robinson's book than in Gates's. Gates would have benefited from drawing on the work that Naomi Oreskes, Eric Conway, Geoffrey Supran, and others have done to document fossil fuel companies' persistent efforts to sow public doubt about climate science.

One thing Gates and Robinson do have in common, however, is the view that geoengineering (monumental interventions that combat the symptoms rather than the causes of climate change) may prove inevitable. In The Ministry for the Future, solar geoengineering, the spraying of fine particles into the atmosphere to reflect more of the sun's heat back into space, is deployed in the aftermath of the deadly heat wave that opens the story. Later, scientists travel to the poles and devise elaborate methods to remove meltwater from beneath glaciers to keep them from sliding into the sea. Despite some setbacks, they prevent several meters of sea level rise. One can imagine Gates appearing in the novel as an early funder of these efforts. As he notes in his own book, he has been investing in solar geoengineering research for years.

The worst part

The title of Elizabeth Kolbert's new book, Under a White Sky (not yet translated into Portuguese), refers to this nascent technology, since deploying it at scale could change the color of the sky from blue to white.

Kolbert notes that the first report on climate change landed on President Lyndon Johnson's desk in 1965. That report did not argue that we should cut carbon emissions by moving away from fossil fuels. Instead, it advocated changing the climate through solar geoengineering, although the term had not yet been coined. It is worrying that some rush straight to these risky solutions instead of addressing the root causes of climate change.

Reading Under a White Sky, we are reminded of the ways such interventions can go wrong. For example, the scientist and writer Rachel Carson advocated importing non-native species as an alternative to using pesticides. The year after her book Silent Spring was published in 1962, the US Fish and Wildlife Service brought Asian carp to America for the first time, in order to control aquatic algae. The approach solved one problem but created another: the spread of this invasive species threatened native ones and caused environmental damage.

As Kolbert notes, her book is about “people trying to solve problems created by people trying to solve problems.” Her account covers examples including ill-fated efforts to stop the spread of the carp, the pumping stations in New Orleans that accelerate the city's sinking, and attempts to selectively breed corals that can tolerate higher temperatures and ocean acidification. Kolbert has a sense of humor and a keen eye for unintended consequences. If you like your apocalypse with a bit of wit, she will make you laugh while Rome burns.

By contrast, although Gates is aware of the potential pitfalls of technological solutions, he still extols inventions such as plastic and fertilizer as vital. Tell that to the sea turtles swallowing plastic waste, or to the fertilizer-fueled algal blooms destroying the ecosystem of the Gulf of Mexico.

With dangerous levels of carbon dioxide in the atmosphere, geoengineering may indeed prove necessary, but we should not be naive about the risks. Gates's book has many good ideas and is worth reading. But for a full picture of the crisis we face, be sure to read Robinson and Kolbert as well.

The Petabyte Age: Because More Isn’t Just More — More Is Different (Wired)

WIRED Staff, Science, 06.23.2008 12:00 PM


Illustration: Marian Bantjes

Introduction:

Sensors everywhere. Infinite storage. Clouds of processors. Our ability to capture, warehouse, and understand massive amounts of data is changing science, medicine, business, and technology. As our collection of facts and figures grows, so will the opportunity to find answers to fundamental questions. Because in the era of big data, more isn’t just more. More is different.

The End of Theory:
The Data Deluge Makes the Scientific Method Obsolete

Feeding the Masses:
Data In, Crop Predictions Out

Chasing the Quark:
Sometimes You Need to Throw Information Away

Winning the Lawsuit:
Data Miners Dig for Dirt

Tracking the News:
A Smarter Way to Predict Riots and Wars

Spotting the Hot Zones:
Now We Can Monitor Epidemics Hour by Hour

Sorting the World:
Google Invents New Way to Manage Data

Watching the Skies:
Space Is Big — But Not Too Big to Map

Scanning Our Skeletons:
Bone Images Show Wear and Tear

Tracking Air Fares:
Elaborate Algorithms Predict Ticket Prices

Predicting the Vote:
Pollsters Identify Tiny Voting Blocs

Pricing Terrorism:
Insurers Gauge Risks, Costs

Visualizing Big Data:
Bar Charts for Words

Big data and the end of theory? (The Guardian)

theguardian.com

Mark Graham, Fri 9 Mar 2012 14.39 GMT

Does big data have the answers? Maybe some, but not all, says Mark Graham

In 2008, Chris Anderson, then editor of Wired, wrote a provocative piece titled The End of Theory. Anderson was referring to the ways that computers, algorithms, and big data can potentially generate more insightful, useful, accurate, or true results than specialists or domain experts who traditionally craft carefully targeted hypotheses and research strategies.

This revolutionary notion has now entered not just the popular imagination but also the research practices of corporations, states, journalists, and academics. The idea is that the data shadows and information trails of people, machines, commodities, and even nature can reveal secrets that we now have the power and prowess to uncover.

In other words, we no longer need to speculate and hypothesise; we simply need to let machines lead us to the patterns, trends, and relationships in social, economic, political, and environmental relationships.

It is quite likely that you yourself have been the unwitting subject of a big data experiment carried out by Google, Facebook, or another large Web platform. Google, for instance, has been able to collect extraordinary insights into what specific colours, layouts, rankings, and designs make people more efficient searchers. It does this by slightly tweaking its results and website for a few million searches at a time and then examining the often subtle ways in which people react.
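
The scale is what makes such experiments work: across millions of searches per variant, even a tiny shift in behaviour becomes statistically detectable. A minimal sketch of the underlying comparison, using a standard two-proportion z-test with entirely hypothetical click counts (this is not Google's actual methodology):

```python
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Z-statistic for the difference between two observed rates."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)  # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical numbers: variant B's layout nudges the click rate
# from 3.00% to 3.05% -- tiny, but resolvable at a million trials each.
z = two_proportion_z(30_000, 1_000_000, 30_500, 1_000_000)
print(round(z, 2))  # → 2.06
```

A |z| above roughly 2 suggests the 0.05-point difference is unlikely to be noise, which is why experiments at this scale can resolve effects far too small for any lab study.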

Most large retailers similarly analyse enormous quantities of data from their databases of sales (which are linked to you by credit card numbers and loyalty cards) in order to make uncanny predictions about your future behaviours. In a now famous case, the American retailer Target upset a Minneapolis man by knowing more about his teenage daughter’s sex life than he did. Target was able to predict his daughter’s pregnancy by monitoring her shopping patterns and comparing that information to an enormous database detailing billions of dollars of sales.

More significantly, national intelligence agencies are mining vast quantities of non-public Internet data to look for weak signals that might indicate planned threats or attacks.

There can be no denying the significant power and potential of big data. And the huge resources being invested in both the public and private sectors to study it are a testament to this.

However, crucially important caveats are needed when using such datasets: caveats that, worryingly, seem to be frequently overlooked.

The raw informational material for big data projects is often derived from large user-generated or social media platforms (e.g. Twitter or Wikipedia). Yet, in all such cases we are necessarily only relying on information generated by an incredibly biased or skewed user-base.

Gender, geography, race, income, and a range of other social and economic factors all play a role in how information is produced and reproduced. People from different places and different backgrounds tend to produce different sorts of information. And so we risk ignoring a lot of important nuance if relying on big data as a social/economic/political mirror.

We can of course account for such bias by segmenting our data. Take the case of using Twitter to gain insights into last summer’s London riots. About a third of all UK Internet users have a Twitter profile; a subset of that group are the active tweeters who produce the bulk of content; and then a tiny subset of that group (about 1%) geocode their tweets (essential information if you want to know where your information is coming from).

Despite the fact that we have a database of tens of millions of data points, we are necessarily working with subsets of subsets of subsets. Big data no longer seems so big. Such data thus serves to amplify the information produced by a small minority (a point repeatedly made by UCL’s Muki Haklay), and skew, or even render invisible, ideas, trends, people, and patterns that aren’t mirrored or represented in the datasets that we work with.
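
The arithmetic behind "subsets of subsets of subsets" is easy to make concrete. A sketch with assumed figures (only the "third" and "about 1%" come from the text above; the population size and active share are invented for illustration):

```python
# Hypothetical illustration: apply each filter in turn to see how few
# people a "big" geocoded-tweet dataset actually represents.
uk_internet_users = 45_000_000   # assumed figure for illustration
twitter_share     = 1 / 3        # roughly a third have a profile
active_share      = 0.20         # assumed share who tweet regularly
geocoded_share    = 0.01         # ~1% geocode their tweets

usable = uk_internet_users * twitter_share * active_share * geocoded_share
print(f"{usable:,.0f}")  # → 30,000
```

Even starting from tens of millions of users, the multiplicative filters leave a sliver of the population, and any conclusions drawn from the data inherit that sliver's biases.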

Big data is undoubtedly useful for addressing and overcoming many important issues faced by society. But we need to ensure that we aren’t seduced by the promises of big data to render theory unnecessary.

We may one day get to the point where sufficient quantities of big data can be harvested to answer all of the social questions that most concern us. I doubt it though. There will always be digital divides; always be uneven data shadows; and always be biases in how information and technology are used and produced.

And so we shouldn’t forget the important role of specialists to contextualise and offer insights into what our data do, and maybe more importantly, don’t tell us.

Mark Graham is a research fellow at the Oxford Internet Institute and is one of the creators of the Floating Sheep blog

The End of Theory: The Data Deluge Makes the Scientific Method Obsolete (Wired)

wired.com

Chris Anderson, Science, 06.23.2008 12:00 PM


Illustration: Marian Bantjes

“All models are wrong, but some are useful.”

So proclaimed statistician George Box 30 years ago, and he was right. But what choice did we have? Only models, from cosmological equations to theories of human behavior, seemed to be able to consistently, if imperfectly, explain the world around us. Until now. Today companies like Google, which have grown up in an era of massively abundant data, don’t have to settle for wrong models. Indeed, they don’t have to settle for models at all.

Sixty years ago, digital computers made information readable. Twenty years ago, the Internet made it reachable. Ten years ago, the first search engine crawlers made it a single database. Now Google and like-minded companies are sifting through the most measured age in history, treating this massive corpus as a laboratory of the human condition. They are the children of the Petabyte Age.

The Petabyte Age is different because more is different. Kilobytes were stored on floppy disks. Megabytes were stored on hard disks. Terabytes were stored in disk arrays. Petabytes are stored in the cloud. As we moved along that progression, we went from the folder analogy to the file cabinet analogy to the library analogy to — well, at petabytes we ran out of organizational analogies.

At the petabyte scale, information is not a matter of simple three- and four-dimensional taxonomy and order but of dimensionally agnostic statistics. It calls for an entirely different approach, one that requires us to lose the tether of data as something that can be visualized in its totality. It forces us to view data mathematically first and establish a context for it later. For instance, Google conquered the advertising world with nothing more than applied mathematics. It didn’t pretend to know anything about the culture and conventions of advertising — it just assumed that better data, with better analytical tools, would win the day. And Google was right.

Google’s founding philosophy is that we don’t know why this page is better than that one: If the statistics of incoming links say it is, that’s good enough. No semantic or causal analysis is required. That’s why Google can translate languages without actually “knowing” them (given equal corpus data, Google can translate Klingon into Farsi as easily as it can translate French into German). And why it can match ads to content without any knowledge or assumptions about the ads or the content.
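The link-statistics idea can be sketched as a tiny PageRank-style power iteration. This is a toy illustration, not Google's actual system; the three-page graph and the damping value are invented:

```python
# Minimal PageRank-style power iteration: rank pages purely from the
# statistics of incoming links, with no semantic analysis at all.
# The tiny link graph below is a made-up illustration.

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            share = damping * rank[page] / len(outgoing)
            for target in outgoing:
                new[target] += share
        rank = new
    return rank

links = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
}
ranks = pagerank(links)
# "c" collects links from both "a" and "b", so it ranks highest.
print(max(ranks, key=ranks.get))
```

No page's content is ever inspected: the ranking falls out of the incoming-link counts alone, which is the point of the paragraph above.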

Speaking at the O’Reilly Emerging Technology Conference this past March, Peter Norvig, Google’s research director, offered an update to George Box’s maxim: “All models are wrong, and increasingly you can succeed without them.”

This is a world where massive amounts of data and applied mathematics replace every other tool that might be brought to bear. Out with every theory of human behavior, from linguistics to sociology. Forget taxonomy, ontology, and psychology. Who knows why people do what they do? The point is they do it, and we can track and measure it with unprecedented fidelity. With enough data, the numbers speak for themselves.

The big target here isn’t advertising, though. It’s science. The scientific method is built around testable hypotheses. These models, for the most part, are systems visualized in the minds of scientists. The models are then tested, and experiments confirm or falsify theoretical models of how the world works. This is the way science has worked for hundreds of years.

Scientists are trained to recognize that correlation is not causation, that no conclusions should be drawn simply on the basis of correlation between X and Y (it could just be a coincidence). Instead, you must understand the underlying mechanisms that connect the two. Once you have a model, you can connect the data sets with confidence. Data without a model is just noise.
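The "it could just be a coincidence" warning is easy to demonstrate: scan enough unrelated random variables and some pair will correlate strongly by chance alone. A toy simulation (not from the article; the sizes are arbitrary):

```python
import random

# With many purely random series, some pair correlates strongly by
# chance: a strong correlation, absent any model, can be pure noise.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

rng = random.Random(0)
# 200 unrelated random series of 10 observations each.
series = [[rng.random() for _ in range(10)] for _ in range(200)]

best = max(
    abs(pearson(series[i], series[j]))
    for i in range(len(series))
    for j in range(i + 1, len(series))
)
print(f"strongest spurious correlation: {best:.2f}")
```

Every series here is noise by construction, yet the best pair looks impressively correlated — exactly the trap the scientific training described above exists to catch.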

But faced with massive data, this approach to science — hypothesize, model, test — is becoming obsolete. Consider physics: Newtonian models were crude approximations of the truth (wrong at the atomic level, but still useful). A hundred years ago, statistically based quantum mechanics offered a better picture — but quantum mechanics is yet another model, and as such it, too, is flawed, no doubt a caricature of a more complex underlying reality. The reason physics has drifted into theoretical speculation about n-dimensional grand unified models over the past few decades (the “beautiful story” phase of a discipline starved of data) is that we don’t know how to run the experiments that would falsify the hypotheses — the energies are too high, the accelerators too expensive, and so on.

Now biology is heading in the same direction. The models we were taught in school about “dominant” and “recessive” genes steering a strictly Mendelian process have turned out to be an even greater simplification of reality than Newton’s laws. The discovery of gene-protein interactions and other aspects of epigenetics has challenged the view of DNA as destiny and even introduced evidence that environment can influence inheritable traits, something once considered a genetic impossibility.

In short, the more we learn about biology, the further we find ourselves from a model that can explain it.

There is now a better way. Petabytes allow us to say: “Correlation is enough.” We can stop looking for models. We can analyze the data without hypotheses about what it might show. We can throw the numbers into the biggest computing clusters the world has ever seen and let statistical algorithms find patterns where science cannot.

The best practical example of this is the shotgun gene sequencing by J. Craig Venter. Enabled by high-speed sequencers and supercomputers that statistically analyze the data they produce, Venter went from sequencing individual organisms to sequencing entire ecosystems. In 2003, he started sequencing much of the ocean, retracing the voyage of Captain Cook. And in 2005 he started sequencing the air. In the process, he discovered thousands of previously unknown species of bacteria and other life-forms.

If the words “discover a new species” call to mind Darwin and drawings of finches, you may be stuck in the old way of doing science. Venter can tell you almost nothing about the species he found. He doesn’t know what they look like, how they live, or much of anything else about their morphology. He doesn’t even have their entire genome. All he has is a statistical blip — a unique sequence that, being unlike any other sequence in the database, must represent a new species.

This sequence may correlate with other sequences that resemble those of species we do know more about. In that case, Venter can make some guesses about the animals — that they convert sunlight into energy in a particular way, or that they descended from a common ancestor. But besides that, he has no better model of this species than Google has of your MySpace page. It’s just data. By analyzing it with Google-quality computing resources, though, Venter has advanced biology more than anyone else of his generation.
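The "statistical blip" idea, a sequence flagged as new simply because it resembles nothing in the database, can be sketched as a k-mer overlap test. A toy illustration, not Venter's pipeline: the sequences, the threshold, and k are invented:

```python
# Flag a sequence as a candidate "new species" when it shares too few
# k-mers (length-k substrings) with everything already in the database.

def kmers(seq, k=4):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

database = {
    "known_a": "ACGTACGTGGTTACGT",
    "known_b": "TTGGCCAATTGGCCAA",
}

def looks_new(seq, threshold=0.2, k=4):
    query = kmers(seq, k)
    best = max(len(query & kmers(ref, k)) / len(query)
               for ref in database.values())
    return best < threshold  # unlike everything known -> candidate new

print(looks_new("ACGTACGTGGTTACGA"))  # near-copy of known_a
print(looks_new("GATCCGATATCCGGAT"))  # resembles nothing stored
```

Nothing about the organism is known; the only signal is dissimilarity to every stored sequence, which is all the paragraph above says Venter has.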

This kind of thinking is poised to go mainstream. In February, the National Science Foundation announced the Cluster Exploratory, a program that funds research designed to run on a large-scale distributed computing platform developed by Google and IBM in conjunction with six pilot universities. The cluster will consist of 1,600 processors, several terabytes of memory, and hundreds of terabytes of storage, along with the software, including IBM’s Tivoli and open source versions of Google File System and MapReduce.[1] Early CluE projects will include simulations of the brain and the nervous system and other biological research that lies somewhere between wetware and software.

Learning to use a “computer” of this scale may be challenging. But the opportunity is great: The new availability of huge amounts of data, along with the statistical tools to crunch these numbers, offers a whole new way of understanding the world. Correlation supersedes causation, and science can advance even without coherent models, unified theories, or really any mechanistic explanation at all.

There’s no reason to cling to our old ways. It’s time to ask: What can science learn from Google?

Chris Anderson (canderson@wired.com) is the editor in chief of Wired.

Related The Petabyte Age: Sensors everywhere. Infinite storage. Clouds of processors. Our ability to capture, warehouse, and understand massive amounts of data is changing science, medicine, business, and technology. As our collection of facts and figures grows, so will the opportunity to find answers to fundamental questions. Because in the era of big data, more isn’t just more. More is different.

Correction:
[1] This story originally stated that the cluster software would include the actual Google File System.
06.27.08

Artificial intelligence already imitates Guimarães Rosa and could change how we think (Folha de S.Paulo)

www1.folha.uol.com.br

Hermano Vianna, anthropologist, writes at the blog hermanovianna.wordpress.com

August 22, 2020


[summary] Astonished by the feats of technologies capable of producing text, even spinning new passages out of a sentence by Guimarães Rosa, an anthropologist analyzes the impacts of artificial intelligence, points to ethical dilemmas in its use, fears growing dependence on the countries that produce the software, and hopes the new practices will let more diverse and collaborative ways of thinking flourish in Brazil.

GPT-3 is the name of the new star in the quest for AI (artificial intelligence). It was released in May of this year by OpenAI, a company approaching the fifth anniversary of its billion-dollar founding, financed by, among others, Elon Musk.

So far, access to its already legendary giga-capacity for generating surprising texts on any subject is the privilege of a few rich and powerful people. There are, however, fun shortcuts for poor mortals: one of them is the game “AI Dungeon”, the creation of a Mormon student, which has been running on GPT-3 fuel since July.

The players’ goal is to create works of literary fiction with the help of this AI model. The starting language is English, but I used Portuguese, and the little creature dodged my trap with admirable agility.

I was even more demanding: I didn’t just use Portuguese, I used Guimarães Rosa. From the first page of “Grande Sertão: Veredas” I copied and pasted: “Alvejei mira em árvore, no quintal, no baixo do córrego”. “AI Dungeon”, which up to that point had been speaking English, took the cue and continued, in Portuguese: “Uma fogueira crepitante brinca e lambiça em torno de um lindo carvalho” (a crackling bonfire plays and licks around a beautiful oak).

Fine, Rosa would never have written that sentence. I ran a search: “crepitar” (to crackle) appears nowhere in “Grande Sertão: Veredas”, and oaks are not usually neighbors of buriti palms. Still, GPT-3 understood that it needed to switch languages to play with me and decided to take a risk: a bonfire is not out of place in my backyard, least of all a playful one. And it did me the favor of confusing Rosa with James Joyce, inventing the verb “lambiçar”, which my spellchecker does not recognize, perhaps to suggest an elaborate or subtly greedy lick.

I was astonished. It is not every day that I receive such a disconcerting reply. I ran another search, courtesy of Google: there is no record of the complete sentence “AI Dungeon” proposed. It really was an original creation. A “very creative” creation.

(I tested Joyce too: when I inserted “Introibo ad altare Dei” from “Ulysses”, likewise sampled from its first page, the game was only slightly less surprising and sent back the translation of the Latin into English.)

Originality. Creativity. The combination of all this really does seem like the attribute of an intelligent being, one conscious of what it is doing or thinking.

From what I understand, since my own modest intelligence is not very trained in this matter, GPT-3, certainly the beefiest model yet for artificially generating text that aims to have a head and a tail, has a very particular way of thinking, one I cannot distinguish from what happens between our neurons: its method is statistical, probabilistic.

It is grounded in the analysis of an overwhelming quantity of text, nearly everything that exists on the internet, in several languages, including computer languages. Its simplest strategy, and I am certainly simplifying a great deal, is to identify which words tend to appear most frequently after others. In its answers, it then guesses what seem, in its “thinking”, to be the most “probable” continuations.
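The next-word strategy described here can be sketched as a tiny bigram model. The corpus below is invented, and GPT-3's transformer is vastly more sophisticated than word counting, but the probabilistic spirit is the same:

```python
from collections import Counter, defaultdict

# Toy bigram model: count which word most often follows each word in a
# corpus, then "guess" by always picking the most frequent successor.
corpus = "the fire plays in the yard and the fire licks the old oak".split()

successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def most_likely_next(word):
    return successors[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "fire" follows "the" most often here
```

The model "knows" nothing about fires or oaks; it only knows which word followed which, which is exactly the author's point about probability without understanding.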

Of course it doesn’t “know” what it is talking about. Perhaps, in my Rosa test, if I had written fish instead, a “beautiful shark” might have appeared in place of the oak; and that would not mean this AI deeply understands the fish-tree distinction.

But how deep does understanding need to go before we recognize it as genuinely intelligent? And isn’t guessing, after all, an everyday feature of the devices of our own intelligence? Am I not shamelessly guessing right here, talking about what I neither master nor understand?

I am not writing this to try to define intelligence or consciousness; better to return to more concrete territory: probability. There is something unusual about a bonfire that plays. That association of ideas or words cannot be very common, but tree calling up oak points to machine-learning training that did not happen in Brazil.

Other trees are statistically more likely to sprout in our “national” memories when they enter the vegetable kingdom. I am thinking, of course, of a well-worn theme in the AI debate: “bias”, inevitable in these models, a consequence of the data that fed their learning, no matter how deep the deep learning was.

The most prejudiced examples are well known, such as the photo-identification AI that classified Black people as gorillas, because nearly all the human beings it “saw” during training were white. A problem with the databases? We need to go deeper still.

Then I remember the first article signed by Kai-Fu Lee, an entrepreneur based in China, that I read in The New York Times. In summary: in the AI race, the US and China hold the top positions, far ahead of all other countries. A few big companies will be the winners.

Each advance demands enormous resources, including traditional energy ones; see the unsustainable electricity consumption needed for GPT-3 to learn to “lambiçar”. Many jobs will disappear. Everyone will need something like a “universal income”. Where will the money come from?

Kai-Fu Lee’s frightening answer, in Google Translator’s rendering, without my corrections: “Therefore, if most countries are not able to tax ultra-lucrative AI, companies to subsidize their workers, what options will they have? I foresee only one: unless they wish to plunge their people into poverty, they will be forced to negotiate with the country that supplies most of their AI software (China or the United States) to become essentially an economic dependent of that country, receiving welfare subsidies in exchange for letting the AI ‘mother’ nation. companies continue to profit from the users of the dependent country. Such economic arrangements would reshape today’s geopolitical alliances.”

Despite the many errors, the conclusion is quite understandable: a new dependency theory. Behold post-colonialism, or cyber-colonialism, as humanity’s inevitable destiny?

And that is without touching on something central in the package to be negotiated: the colony will also submit to the set of biases of the AI “mother nation”. Get ready: oak forests, with no buriti palms.

Recently, but before the GPT-3 hype, the same Kai-Fu Lee made news by giving AI a B- for its performance during the pandemic. He spent his quarantine in Beijing. He says the couriers delivering his purchases were always robots, and, from what I saw in the 2019 season of Expresso Futuro, filmed in China by Ronaldo Lemos and company, I believe it.

He was disappointed, however, by machine learning’s lack of protagonism in the development of vaccines and treatments. With my own ill-prepared audacity, I would guess a similar grade, perhaps a C+, to follow the American-university bias.

I applauded, for example, when IBM made Watson’s services available to organizations fighting the coronavirus. Or when giant companies such as Google and Amazon barred the use of their facial recognition technologies after the antiracist demonstrations around the world.

Smaller companies, however, with surveillance AIs no less powerful, took advantage of the lack of competition to grow their client base. And we have seen how contact- and contagion-tracing apps announce the totalitarian transparency of all our movements, through algorithms that have already made old notions of privacy obsolete.

All of it quite frightening for anyone who defends democratic principles. Yet not even the most authoritarian state will be guaranteed control of its own secrets.

These problems are recognized throughout the community of AI developers. Many groups (such as The Partnership on AI, which ranges from OpenAI to the Electronic Frontier Foundation) have devoted themselves for years to the debate over the ethics of using artificial intelligence.

It is an extremely complex debate, full of dangerous dead ends, as the trajectory of Mustafa Suleyman, one of the most fascinating personalities of the 21st century, demonstrates. He was one of the three founders of DeepMind, the British company, later bought by Google, that created the famous AI that beat the world champion of Go, the board game created in China more than 2,500 years ago.

The trio’s biographies could inspire films or series. Demis Hassabis has a Greek-Cypriot father and a mother from Singapore; Shane Legg was born in the north of New Zealand; and Mustafa Suleyman is the son of a Syrian taxi driver who immigrated to London.

Suleyman’s pre-DeepMind story is a curious one: while studying at the University of Oxford, he set up a telephone service to care for the mental health of young Muslims. He later worked as a conflict-resolution consultant. In the AI world (he now handles policy at Google) he has never minced words. Look up his talks and interviews on YouTube: he has always pressed on every wound, like an outside critic, but speaking from the most powerful center.

I especially like his talk at the Royal Society, delivered in his post-punk style and introduced by Princess Anne. Even so, with all his very clear political awareness and ethical concerns that strike me as quite sincere, Mustafa Suleyman found himself caught up in a scandal involving the alleged unauthorized use of data from patients of the NHS (the British public health service) to develop apps meant to help monitor critically ill hospital patients.

There were many explanations from DeepMind, Google, and the NHS. It is an example of the kind of problem we will live with more and more, one that needs new regulatory frameworks to determine which algorithms may meddle with our lives and, above all, who will understand what an algorithm can do and what the company that owns that algorithm can do.

One thing I have already learned from thinking about this kind of problem: diversity matters not only in the databases used in machine-learning processes but also in the ways each AI “thinks” and in the security systems that audit the algorithms shaping those thoughts.

This need has been explored best in experiments that bring AI developers and artists together. I follow with enormous interest the work of Kenric Mc Dowell, who handles artists’ engagement with Google’s machine-learning laboratories.

His most recent work bets on the possible existence of non-human intelligences and on seeking collaboration between different kinds of intelligence and modes of thinking, including inspiration drawn from the cosmotechnics of the Chinese philosopher Yuk Hui, who passed through Paraíba and Rio de Janeiro last year.

On the same trail, I follow the evolving art-and-robotics practice of Ken Goldberg, a professor at the University of California, Berkeley. In 2017 he published an article in the Wall Street Journal defending the idea that became my current motto: forget the singularity, long live multiplicity.

Through Ken Goldberg I also learned what a random forest is: a machine-learning method that uses not just one algorithm but an Atlantic rainforest of algorithms, ideally each thinking in its own way, with decisions taken jointly, thereby seeking, among other advantages, to avoid “individual” biases.
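The "forest of algorithms" idea can be sketched in miniature: many weak learners, each trained on a random bootstrap sample of the data, voting together. This is a hand-rolled toy on synthetic data; real random forests use decision trees and random feature subsets:

```python
import random
from collections import Counter

# Each learner here is a one-feature threshold rule ("stump") trained on
# a bootstrap sample; the forest predicts by majority vote.

def train_stump(sample):
    best = None
    for x, _label in sample:
        thresh = x[0]
        # Rule: values <= thresh are class 0; score it on this sample.
        acc = sum((xi[0] <= thresh) == (li == 0) for xi, li in sample) / len(sample)
        if best is None or acc > best[1]:
            best = (thresh, acc)
    return lambda x, t=best[0]: 0 if x[0] <= t else 1

def train_forest(data, n_learners=25, seed=42):
    rng = random.Random(seed)
    return [train_stump([rng.choice(data) for _ in data])  # bootstrap
            for _ in range(n_learners)]

def predict(forest, x):
    votes = Counter(stump(x) for stump in forest)
    return votes.most_common(1)[0][0]  # majority vote

# Synthetic data: class 0 clusters near 0, class 1 near 10.
data = [((i,), 0) for i in range(5)] + [((i,), 1) for i in range(10, 15)]
forest = train_forest(data)
print(predict(forest, (1,)), predict(forest, (12,)))
```

No single stump is trusted; the decision emerges from the vote, which is the bias-diluting property Goldberg's metaphor celebrates.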

My desperate utopia for Brazil: may the random forest grow ever greener here. With the development of other AIs, or of AIs that are truly other. Anthropophagic artificial intelligences. GPTs-n to infinity, able to think in the 200 Indigenous languages that exist/resist here. Chatbots that rap with a Pará tecnobrega accent, announcing the formulas that will solve all of humanity’s food problems.

Intelligence is not what we lack. Intelligence like that of the young engineer Marianne Linhares, who went straight from her undergraduate course at the Federal University of Campina Grande to DeepMind in London.

In another possible world, she could have stayed here, collaborating with the machine-learning crowd at UFPB (and, via GitHub, with the whole world), perhaps inventing an AI that truly understands the literature of Guimarães Rosa. Or one that could answer the question of “Meu Tio o Iauaretê”, “do you know what the jaguar thinks?”, by thinking like a jaguar. Good. Beautiful.

Scientists plan digital resurrection with bots and humanoids (Canal Tech)

By Natalie Rosa | June 25, 2020, 4:40 PM

In February of this year, the whole world was surprised by the story of Jang Ji-sung, a South Korean woman who was “reunited” with her deceased daughter thanks to artificial intelligence. The girl died in 2016 of a blood disease.

In the simulated encounter, the image of little Nayeon is shown to her mother, who stands in front of a green screen (chroma key) wearing a virtual reality headset. The interaction was not only visual: it was also possible to talk and play with the child. According to Jang, the experience was like a dream she had always wanted to have.

Jang Ji-sung’s encounter with the digitized form of her daughter (Image: Reproduction)

However much this looks like a trend that would be hard to carry out at scale in real life, and despite being a rather old preoccupation of science fiction, there are people interested in this form of immortality. The question that remains, however, is whether we should do it, and how it would happen.

In an interview with CNET, John Troyer, director of the Centre for Death and Society at the University of Bath, in England, and author of the book Technologies of the Human Corpse, says that the modern interest in immortality began back in the 1960s. At the time, many people believed in the cryonic preservation of bodies, in which a corpse, or just a human head, was frozen in the hope of being resuscitated in the future. To date, no attempt has been made to revive any of them.

“There was a change in the science of death at that time, and the idea that somehow humans could defeat death,” Troyer explains. He also notes that there is still no peer-reviewed research showing that investing millions in uploading the brain’s data, or in keeping a body alive, is worth it.

In 2016, a study published in the academic journal Plos One found that exposing a preserved brain to chemical and electrical probes can make it function again. “All of this is a bet on what is possible in the future. But I’m not convinced it’s possible in the way they are describing or wishing for,” he adds.

Overcoming grief

The case in South Korea is not the only one involving grief. In 2015, Eugenia Kuyda, co-founder and CEO of the software company Replika, suffered the loss of her best friend Roman, who was struck by a car in Moscow, Russia. The executive then decided to create a chatbot trained on thousands of text messages the two had exchanged over the years, resulting in a digital version of Roman that can talk with friends and family.

“It was very emotional. I wasn’t expecting to feel that way, because I had worked on that chatbot and knew how it was built,” Kuyda recounts. The experience strongly recalls an episode of the series Black Mirror, which explores a dystopian future of technology. In “Be Right Back”, from 2013, a young woman loses her boyfriend in a car accident and signs up for a project that lets her communicate with “him” digitally, thanks to artificial intelligence.

On the other hand, Kuyda says the project was not created to be commercialized, but as a personal way of dealing with the loss of her best friend. She says anyone trying to reproduce the feat will run into a series of obstacles and difficulties, such as deciding what kind of information counts as public or private, or who the chatbot may interact with. The way we talk with a friend, for example, is not the way we talk with family members, and Kuyda says there is no way to make that distinction.

A digital version of a person will not develop new conversations or voice new opinions; it will replicate sentences and words already said, basically fitting them into the chat. “We leave an insane amount of data, but most of it isn’t personal, private, or based on what kind of person we are,” says Kuyda. Responding to CNET, the executive says it is impossible to obtain 100% accurate data about a person, because there is currently no technology capable of capturing what goes on in our minds.
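A digital double of this kind can be sketched as a pure retrieval bot: it composes nothing new, it only returns the stored phrase that best matches the prompt. The message log and the word-overlap similarity below are invented for illustration; real systems are far richer:

```python
import re

# Retrieval-style "memorial chatbot": it never generates a sentence,
# it only echoes the stored phrase whose words overlap the prompt most.

stored_messages = [
    "see you at the cafe tomorrow",
    "that movie was amazing",
    "I always take the late train home",
]

def words(text):
    return set(re.findall(r"[a-z]+", text.lower()))

def reply(prompt):
    prompt_words = words(prompt)
    return max(stored_messages, key=lambda msg: len(prompt_words & words(msg)))

print(reply("which train do you usually take"))
```

Every reply is a verbatim past message fitted to the conversation, which is exactly the limitation Kuyda describes.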

Data collection thus ends up being the biggest barrier to creating any kind of software that represents a person after death. Part of the reason is that most content posted online is hosted by a company and comes to belong to the platform. If the company one day shuts down, the data goes with it. For Troyer, memory technology does not tend to survive the passage of time.


A fresh brain

The startup Nectome has been dedicated to brain preservation, with an eye to possibly extracting memories after death. For that to happen, however, the organ needs to be “fresh”, which would mean death would have to occur by euthanasia.

The startup’s goal is to run its tests with volunteers who are terminally ill and permitted physician-assisted suicide. So far Nectome has collected US$ 10,000 in refundable deposits for a waiting list for the procedure, should the opportunity one day become available. For now, the company still has clinical trials ahead of it.

The startup has already raised a million dollars in funding and had been collaborating with an MIT neuroscientist. But when the story was published it drew a strong backlash from scientists and ethicists, and MIT ended its contract with the startup. Critics held that the company’s project is simply not feasible.

Here is the statement MIT made at the time:

“Neuroscience has not sufficiently advanced to the point where we know whether any brain-preservation method is powerful enough to preserve the different kinds of biomolecules related to memory and the mind. It is also not known whether it is possible to recreate a person’s consciousness,” the note said, back in 2018.

Eternalization through augmented reality

While some think about extracting the mind from a brain, other companies opt for a simpler, though no less invasive, “resurrection”. The company Augmented Reality, for example, aims to help people live on in a digital format, transmitting the knowledge of today’s people to future generations.

Hossein Rahnama, founder and CEO of the computing company FlyBits and a professor at the MIT Media Lab, has been trying to build software agents that can act as digital heirs. “Millennials are creating gigabytes of data daily, and we are reaching a level of maturity at which we can really create a digital version of ourselves,” he says.

To put the project into action, Augmented Reality feeds a machine-learning engine with a person’s emails, photos, and social media activity, analyzing how they think and act. This makes it possible to deliver a digital copy of a real person, able to interact via chatbot, digitally edited video, or even a humanoid robot.

Speaking of humanoids, the Intelligent Robotics laboratory at Osaka University, in Japan, already houses more than 30 human-like androids, including a robotic version of Hiroshi Ishiguro, the lab’s director. The scientist has broken new ground in research on human-robot interaction, studying the importance of details such as subtle eye movements and facial expressions.

Image: Hiroshi Ishiguro Laboratory, ATR

When Ishiguro dies, he says, his robot can replace him and teach his classes, even though this machine will never really be him and cannot generate new ideas. “We cannot transmit our consciousness to robots. We share, perhaps, the memories. A robot can say ‘I am Hiroshi Ishiguro’, but even so the consciousness is independent,” he states.

For Ishiguro, none of this will ever look like what we see in science fiction. Memory downloading, for example, is something that will not happen, because it simply is not possible. “We need different ways of making a copy of our brains, but we don’t yet know how to do that,” he adds.


The new astrology (Aeon)

By fetishising mathematical models, economists turned economics into a highly paid pseudoscience

04 April, 2016

Alan Jay Levinovitz is an assistant professor of philosophy and religion at James Madison University in Virginia. His most recent book is The Gluten Lie: And Other Myths About What You Eat (2015). Edited by Sam Haselby.

 

What would make economics a better discipline?

Since the 2008 financial crisis, colleges and universities have faced increased pressure to identify essential disciplines, and cut the rest. In 2009, Washington State University announced it would eliminate the department of theatre and dance, the department of community and rural sociology, and the German major – the same year that the University of Louisiana at Lafayette ended its philosophy major. In 2012, Emory University in Atlanta did away with the visual arts department and its journalism programme. The cutbacks aren’t restricted to the humanities: in 2011, the state of Texas announced it would eliminate nearly half of its public undergraduate physics programmes. Even when there’s no downsizing, faculty salaries have been frozen and departmental budgets have shrunk.

But despite the funding crunch, it’s a bull market for academic economists. According to a 2015 sociological study in the Journal of Economic Perspectives, the median salary of economics teachers in 2012 increased to $103,000 – nearly $30,000 more than sociologists. For the top 10 per cent of economists, that figure jumps to $160,000, higher than the next most lucrative academic discipline – engineering. These figures, stress the study’s authors, do not include other sources of income such as consulting fees for banks and hedge funds, which, as many learned from the documentary Inside Job (2010), are often substantial. (Ben Bernanke, a former academic economist and ex-chairman of the Federal Reserve, earns $200,000-$400,000 for a single appearance.)

Unlike engineers and chemists, economists cannot point to concrete objects – cell phones, plastic – to justify the high valuation of their discipline. Nor, in the case of financial economics and macroeconomics, can they point to the predictive power of their theories. Hedge funds employ cutting-edge economists who command princely fees, but routinely underperform index funds. Eight years ago, Warren Buffett made a 10-year, $1 million bet that a portfolio of hedge funds would lose to the S&P 500, and it looks like he’s going to collect. In 1998, a fund that boasted two Nobel Laureates as advisors collapsed, nearly causing a global financial crisis.

The failure of the field to predict the 2008 crisis has also been well documented. In 2003, for example, only five years before the Great Recession, the Nobel Laureate Robert E Lucas Jr told the American Economic Association that ‘macroeconomics […] has succeeded: its central problem of depression prevention has been solved’. Short-term predictions fare little better – in April 2014, for instance, a survey of 67 economists yielded 100 per cent consensus: interest rates would rise over the next six months. Instead, they fell. A lot.

Nonetheless, surveys indicate that economists see their discipline as ‘the most scientific of the social sciences’. What is the basis of this collective faith, shared by universities, presidents and billionaires? Shouldn’t successful and powerful people be the first to spot the exaggerated worth of a discipline, and the least likely to pay for it?

In the hypothetical worlds of rational markets, where much of economic theory is set, perhaps. But real-world history tells a different story, of mathematical models masquerading as science and a public eager to buy them, mistaking elegant equations for empirical accuracy.

As an extreme example, take the extraordinary success of Evangeline Adams, a turn-of-the-20th-century astrologer whose clients included the president of Prudential Insurance, two presidents of the New York Stock Exchange, the steel magnate Charles M Schwab, and the banker J P Morgan. To understand why titans of finance would consult Adams about the market, it is essential to recall that astrology used to be a technical discipline, requiring reams of astronomical data and mastery of specialised mathematical formulas. ‘An astrologer’ is, in fact, the Oxford English Dictionary’s second definition of ‘mathematician’. For centuries, mapping stars was the job of mathematicians, a job motivated and funded by the widespread belief that star-maps were good guides to earthly affairs. The best astrology required the best astronomy, and the best astronomy was done by mathematicians – exactly the kind of person whose authority might appeal to bankers and financiers.

In fact, when Adams was arrested in 1914 for violating a New York law against astrology, it was mathematics that eventually exonerated her. During the trial, her lawyer Clark L Jordan emphasised mathematics in order to distinguish his client’s practice from superstition, calling astrology ‘a mathematical or exact science’. Adams herself demonstrated this ‘scientific’ method by reading the astrological chart of the judge’s son. The judge was impressed: the plaintiff, he observed, went through a ‘mathematical process to get at her conclusions… I am satisfied that the element of fraud… is absent here.’

The enchanting force of mathematics blinded the judge – and Adams’s prestigious clients – to the fact that astrology relies upon a highly unscientific premise, that the position of stars predicts personality traits and human affairs such as the economy. It is this enchanting force that explains the enduring popularity of financial astrology, even today. The historian Caley Horan at the Massachusetts Institute of Technology described to me how computing technology made financial astrology explode in the 1970s and ’80s. ‘Within the world of finance, there’s always a superstitious, quasi-spiritual trend to find meaning in markets,’ said Horan. ‘Technical analysts at big banks, they’re trying to find patterns in past market behaviour, so it’s not a leap for them to go to astrology.’ In 2000, USA Today quoted Robin Griffiths, the chief technical analyst at HSBC, the world’s third largest bank, saying that ‘most astrology stuff doesn’t check out, but some of it does’.

Ultimately, the problem isn’t with worshipping models of the stars, but rather with uncritical worship of the language used to model them, and nowhere is this more prevalent than in economics. The economist Paul Romer at New York University has recently begun calling attention to an issue he dubs ‘mathiness’ – first in the paper ‘Mathiness in the Theory of Economic Growth’ (2015) and then in a series of blog posts. Romer believes that macroeconomics, plagued by mathiness, is failing to progress as a true science should, and compares debates among economists to those between 16th-century advocates of heliocentrism and geocentrism. Mathematics, he acknowledges, can help economists to clarify their thinking and reasoning. But the ubiquity of mathematical theory in economics also has serious downsides: it creates a high barrier to entry for those who want to participate in the professional dialogue, and makes checking someone’s work excessively laborious. Worst of all, it imbues economic theory with unearned empirical authority.

‘I’ve come to the position that there should be a stronger bias against the use of math,’ Romer explained to me. ‘If somebody came and said: “Look, I have this Earth-changing insight about economics, but the only way I can express it is by making use of the quirks of the Latin language”, we’d say go to hell, unless they could convince us it was really essential. The burden of proof is on them.’

Right now, however, there is widespread bias in favour of using mathematics. The success of math-heavy disciplines such as physics and chemistry has endowed mathematical formulas with decisive authoritative force. Lord Kelvin, the 19th-century mathematical physicist, expressed this quantitative obsession:

When you can measure what you are speaking about and express it in numbers you know something about it; but when you cannot measure it… in numbers, your knowledge is of a meagre and unsatisfactory kind.

The trouble with Kelvin’s statement is that measurement and mathematics do not guarantee the status of science – they guarantee only the semblance of science. When the presumptions or conclusions of a scientific theory are absurd or simply false, the theory ought to be questioned and, eventually, rejected. The discipline of economics, however, is presently so blinkered by the talismanic authority of mathematics that theories go overvalued and unchecked.

Romer is not the first to elaborate the mathiness critique. In 1886, an article in Science accused economics of misusing the language of the physical sciences to conceal ‘emptiness behind a breastwork of mathematical formulas’. More recently, Deirdre N McCloskey’s The Rhetoric of Economics (1998) and Robert H Nelson’s Economics as Religion (2001) both argued that mathematics in economic theory serves, in McCloskey’s words, primarily to deliver the message ‘Look at how very scientific I am.’

After the Great Recession, the failure of economic science to protect our economy was once again impossible to ignore. In 2009, the Nobel Laureate Paul Krugman tried to explain it in The New York Times with a version of the mathiness diagnosis. ‘As I see it,’ he wrote, ‘the economics profession went astray because economists, as a group, mistook beauty, clad in impressive-looking mathematics, for truth.’ Krugman named economists’ ‘desire… to show off their mathematical prowess’ as the ‘central cause of the profession’s failure’.

The mathiness critique isn’t limited to macroeconomics. In 2014, the Stanford financial economist Paul Pfleiderer published the paper ‘Chameleons: The Misuse of Theoretical Models in Finance and Economics’, which helped to inspire Romer’s understanding of mathiness. Pfleiderer called attention to the prevalence of ‘chameleons’ – economic models ‘with dubious connections to the real world’ that substitute ‘mathematical elegance’ for empirical accuracy. Like Romer, Pfleiderer wants economists to be transparent about this sleight of hand. ‘Modelling,’ he told me, ‘is now elevated to the point where things have validity just because you can come up with a model.’

The notion that an entire culture – not just a few eccentric financiers – could be bewitched by empty, extravagant theories might seem absurd. How could all those people, all that math, be mistaken? This was my own feeling as I began investigating mathiness and the shaky foundations of modern economic science. Yet, as a scholar of Chinese religion, it struck me that I’d seen this kind of mistake before, in ancient Chinese attitudes towards the astral sciences. Back then, governments invested incredible amounts of money in mathematical models of the stars. To evaluate those models, government officials had to rely on a small cadre of experts who actually understood the mathematics – experts riven by ideological differences, who couldn’t even agree on how to test their models. And, of course, despite collective faith that these models would improve the fate of the Chinese people, they did not.

Astral Science in Early Imperial China, a forthcoming book by the historian Daniel P Morgan, shows that in ancient China, as in the Western world, the most valuable type of mathematics was devoted to the realm of divinity – to the sky, in their case (and to the market, in ours). Just as astrology and mathematics were once synonymous in the West, the Chinese spoke of li, the science of calendrics, which early dictionaries also glossed as ‘calculation’, ‘numbers’ and ‘order’. Li models, like macroeconomic theories, were considered essential to good governance. In the classic Book of Documents, the legendary sage king Yao transfers the throne to his successor with mention of a single duty: ‘Yao said: “Oh thou, Shun! The li numbers of heaven rest in thy person.”’

China’s oldest mathematical text invokes astronomy and divine kingship in its very title – The Arithmetical Classic of the Gnomon of the Zhou. The title’s inclusion of ‘Zhou’ recalls the mythic Eden of the Western Zhou dynasty (1045–771 BCE), implying that paradise on Earth can be realised through proper calculation. The book’s introduction to the Pythagorean theorem asserts that ‘the methods used by Yu the Great in governing the world were derived from these numbers’. It was an unquestioned article of faith: the mathematical patterns that govern the stars also govern the world. Faith in a divine, invisible hand, made visible by mathematics. No wonder that a newly discovered text fragment from 200 BCE extolls the virtues of mathematics over the humanities. In it, a student asks his teacher whether he should spend more time learning speech or numbers. His teacher replies: ‘If my good sir cannot fathom both at once, then abandon speech and fathom numbers, [for] numbers can speak, [but] speech cannot number.’

Modern governments, universities and businesses underwrite the production of economic theory with huge amounts of capital. The same was true for li production in ancient China. The emperor – the ‘Son of Heaven’ – spent astronomical sums refining mathematical models of the stars. Take the armillary sphere: a two-metre cage of graduated bronze rings in Nanjing, made to represent the celestial sphere and used to visualise data in three dimensions. As Morgan emphasises, the sphere was literally made of money. Bronze being the basis of the currency, governments were smelting cash by the metric ton to pour it into li. A divine, mathematical world-engine, built of cash, sanctifying the powers that be.

The enormous investment in li depended on a huge assumption: that good government, successful rituals and agricultural productivity all depended upon the accuracy of li. But there were, in fact, no practical advantages to the continued refinement of li models. The calendar rounded off decimal points such that the difference between two models, hotly contested in theory, didn’t matter to the final product. The work of selecting auspicious days for imperial ceremonies thus benefited only in appearance from mathematical rigour. And of course the comets, plagues and earthquakes that these ceremonies promised to avert kept on coming. Farmers, for their part, went about business as usual. Occasional governmental efforts to scientifically micromanage farm life in different climes using li ended in famine and mass migration.

Like many economic models today, li models were less important to practical affairs than their creators (and consumers) thought them to be. And, like today, only a few people could understand them. In 101 BCE, Emperor Wudi tasked high-level bureaucrats – including the Great Director of the Stars – with creating a new li that would glorify the beginning of his path to immortality. The bureaucrats refused the task because ‘they couldn’t do the math’, and recommended the emperor outsource it to experts.

The debates of these ancient li experts bear a striking resemblance to those of present-day economists. In 223 CE, a petition was submitted to the emperor asking him to approve tests of a new li model developed by the assistant director of the astronomical office, a man named Han Yi.

At the time of the petition, Han Yi’s model, and its competitor, the so-called Supernal Icon, had already been subjected to three years of ‘reference’, ‘comparison’ and ‘exchange’. Still, no one could agree which one was better. Nor, for that matter, was there any agreement on how they should be tested.

In the end, a live trial involving the prediction of eclipses and heliacal risings was used to settle the debate. With the benefit of hindsight, we can see that this trial was seriously flawed. The heliacal rising (first visibility) of planets depends on non-mathematical factors such as eyesight and atmospheric conditions. That’s not to mention the scoring of the trial, which was modelled on archery competitions. Archers scored points for proximity to the bullseye, with no consideration for overall accuracy. The equivalent in economic theory might be to grant a model high points for success in predicting short-term markets, while failing to deduct for missing the Great Recession.
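The asymmetry in that scoring rule can be made concrete with a toy sketch. Everything below is invented for illustration – the thresholds, the two ‘models’ and their errors are assumptions of mine, drawn neither from the 223 CE trial records nor from market data:

```python
# Toy illustration of an archery-style scoring rule: points for proximity
# to the bullseye, no deduction for a catastrophic miss. All numbers and
# 'models' here are hypothetical.

def archery_score(errors, bullseye=1.0, ring=5.0):
    """2 points for a near-bullseye, 1 for the outer ring, 0 otherwise.
    A huge miss costs no more than a modest one."""
    score = 0
    for e in errors:
        if abs(e) <= bullseye:
            score += 2
        elif abs(e) <= ring:
            score += 1
    return score

def penalising_score(errors):
    """A rule that does deduct for big misses: negative sum of squared errors."""
    return -sum(e * e for e in errors)

# Model A nails routine short-term predictions but misses one catastrophe
# by a huge margin; Model B is mediocre everywhere, never disastrous.
model_a = [0.5, 0.3, 0.8, 0.2, 100.0]
model_b = [3.0, 4.0, 3.5, 4.5, 4.0]

print(archery_score(model_a), archery_score(model_b))         # 8 5 -> A 'wins'
print(penalising_score(model_a) > penalising_score(model_b))  # False -> B wins
```

Under the proximity-only rule, the model that misses the catastrophe still comes out ahead; under a rule that deducts for large errors, the ranking flips.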

None of this is to say that li models were useless or inherently unscientific. For the most part, li experts were genuine mathematical virtuosos who valued the integrity of their discipline. Despite being based on an inaccurate assumption – that the Earth was at the centre of the cosmos – their models really did work to predict celestial motions. Imperfect though the live trial might have been, it indicates that superior predictive power was a theory’s most important virtue. All of this is consistent with real science, and Chinese astronomy progressed as a science, until it reached the limits imposed by its assumptions.

However, there was no science to the belief that accurate li would improve the outcome of rituals, agriculture or government policy. No science to the Hall of Light, a temple for the emperor built on the model of a magic square. There, by numeric ritual gesture, the Son of Heaven was thought to channel the invisible order of heaven for the prosperity of man. This was quasi-theology, the belief that heavenly patterns – mathematical patterns – could be used to model every event in the natural world, in politics, even the body. Macro- and microcosm were scaled reflections of one another, yin and yang in a unifying, salvific mathematical vision. The expensive gadgets, the personnel, the bureaucracy, the debates, the competition – all of this testified to the divinely authoritative power of mathematics. The result, then as now, was overvaluation of mathematical models based on unscientific exaggerations of their utility.

In ancient China it would have been unfair to blame li experts for the pseudoscientific exploitation of their theories. These men had no way to evaluate the scientific merits of assumptions and theories – ‘science’, in a formalised, post-Enlightenment sense, didn’t really exist. But today it is possible to distinguish, albeit roughly, science from pseudoscience, astronomy from astrology. Hypothetical theories, whether those of economists or conspiracists, aren’t inherently pseudoscientific. Conspiracy theories can be diverting – even instructive – flights of fancy. They become pseudoscience only when promoted from fiction to fact without sufficient evidence.

Romer believes that fellow economists know the truth about their discipline, but don’t want to admit it. ‘If you get people to lower their shield, they’ll tell you it’s a big game they’re playing,’ he told me. ‘They’ll say: “Paul, you may be right, but this makes us look really bad, and it’s going to make it hard for us to recruit young people.”’

Demanding more honesty seems reasonable, but it presumes that economists understand the tenuous relationship between mathematical models and scientific legitimacy. In fact, many assume the connection is obvious – just as in ancient China, the connection between li and the world was taken for granted. When reflecting in 1999 on what makes economics more scientific than the other social sciences, the Harvard economist Richard B Freeman explained that economics ‘attracts stronger students than [political science or sociology], and our courses are more mathematically demanding’. In Lives of the Laureates (2004), Robert E Lucas Jr writes rhapsodically about the importance of mathematics: ‘Economic theory is mathematical analysis. Everything else is just pictures and talk.’ Lucas’s veneration of mathematics leads him to adopt a method that can only be described as a subversion of empirical science:

The construction of theoretical models is our way to bring order to the way we think about the world, but the process necessarily involves ignoring some evidence or alternative theories – setting them aside. That can be hard to do – facts are facts – and sometimes my unconscious mind carries out the abstraction for me: I simply fail to see some of the data or some alternative theory.

Even for those who agree with Romer, conflict of interest still poses a problem. Why would skeptical astronomers question the emperor’s faith in their models? In a phone conversation, Daniel Hausman, a philosopher of economics at the University of Wisconsin, put it bluntly: ‘If you reject the power of theory, you demote economists from their thrones. They don’t want to become like sociologists.’

George F DeMartino, an economist and an ethicist at the University of Denver, frames the issue in economic terms. ‘The interest of the profession is in pursuing its analysis in a language that’s inaccessible to laypeople and even some economists,’ he explained to me. ‘What we’ve done is monopolise this kind of expertise, and we of all people know how that gives us power.’

Every economist I interviewed agreed that conflicts of interest were highly problematic for the scientific integrity of their field – but only tenured ones were willing to go on the record. ‘In economics and finance, if I’m trying to decide whether I’m going to write something favourable or unfavourable to bankers, well, if it’s favourable that might get me a dinner in Manhattan with movers and shakers,’ Pfleiderer said to me. ‘I’ve written articles that wouldn’t curry favour with bankers but I did that when I had tenure.’

Then there’s the additional problem of sunk-cost bias. If you’ve invested in an armillary sphere, it’s painful to admit that it doesn’t perform as advertised. When confronted with their profession’s lack of predictive accuracy, some economists find it difficult to admit the truth. Easier, instead, to double down, like the economist John H Cochrane at the University of Chicago. The problem isn’t too much mathematics, he writes in response to Krugman’s 2009 post-Great-Recession mea culpa for the field, but rather ‘that we don’t have enough math’. Astrology doesn’t work, sure, but only because the armillary sphere isn’t big enough and the equations aren’t good enough.

If overhauling economics depended solely on economists, then mathiness, conflict of interest and sunk-cost bias could easily prove insurmountable. Fortunately, non-experts also participate in the market for economic theory. If people remain enchanted by PhDs and Nobel Prizes awarded for the production of complicated mathematical theories, those theories will remain valuable. If they become disenchanted, the value will drop.

Economists who rationalise their discipline’s value can be convincing, especially with prestige and mathiness on their side. But there’s no reason to keep believing them. The pejorative verb ‘rationalise’ itself warns of mathiness, reminding us that we often deceive each other by making prior convictions, biases and ideological positions look ‘rational’, a word that confuses truth with mathematical reasoning. To be rational is, simply, to think in ratios, like the ratios that govern the geometry of the stars. Yet when mathematical theory is the ultimate arbiter of truth, it becomes difficult to see the difference between science and pseudoscience. The result is people like the judge in Evangeline Adams’s trial, or the Son of Heaven in ancient China, who trust the mathematical exactitude of theories without considering their performance – that is, who confuse math with science, rationality with reality.

There is no longer any excuse for making the same mistake with economic theory. For more than a century, the public has been warned, and the way forward is clear. It’s time to stop wasting our money and recognise the high priests for what they really are: gifted social scientists who excel at producing mathematical explanations of economies, but who fail, like astrologers before them, at prophecy.

Transgenics and hydroelectric dams (Estadão); and a response (JC)

Transgenics and hydroelectric dams

Recently, a hundred scientists who have won the Nobel Prize in various fields of knowledge signed an appeal to the environmental organisation Greenpeace to abandon its long-running campaign against the use of transgenic crops for food production. Transgenics are products whose genetic code has been altered to give them special characteristics, such as protection against pests, better resistance to periods of drought, higher productivity and others.

José Goldemberg*

15 August 2016 | 05:00

The success of transgenics is evident in many crops, such as soybean production, of which Brazil is an example. When transgenic products first came into use, however, objections were raised, since the genetic modifications could have unforeseeable consequences. Greenpeace became the champion of the campaigns against their use, which was banned in several countries.

The initial objections rested on two kinds of consideration: one scientific in character, which was seriously investigated by scientists; the other more general, based on the ‘precautionary principle’, which says, in essence, that it falls to the proponent of a new product to demonstrate that it has no harmful or dangerous consequences. The ‘precautionary principle’ has been used, with greater or lesser success, to block the introduction of innovations.

This principle has a strong moral and political component and has been invoked very unevenly over time. It was not invoked, for example, when nuclear energy began to be used some 60 years ago to produce electricity; as a result, hundreds of nuclear reactors were installed in many countries, and some of them caused accidents of major proportions. In the case of man-made climate change, however – driven by the burning of fossil fuels and the release into the atmosphere of gases that warm the planet – the principle was incorporated into the Climate Convention in 1992 and is leading countries to reduce their use of those fuels.

The Nobel laureates’ statement argues that experience has shown the concerns about possible negative consequences of transgenics to be unjustified, and that opposing them no longer makes sense.

In a few countries, the ‘precautionary principle’ has also been invoked to obstruct the construction of hydroelectric plants, given that building them affects riverside populations and has environmental impacts. This is a genuinely serious problem in densely populated countries such as India, whose territory is about a third the size of Brazil’s and whose population is four times larger. Any hydroelectric plant in India affects hundreds of thousands of people. That is not the case in Brazil, much of whose territory lies in the Amazon, where the population is small. Even so, the construction of plants in the Amazon to supply the more populous regions and the great industrial centres of the Southeast has faced serious objections from activist groups.

In the past, hydroelectric plants were planned with reservoirs. Without reservoirs, electricity production varies over the course of the year. To avoid this, artificial lakes are built to store water for the months of the year when little rain falls.

Until recently, almost all the electricity used in Brazil was produced by hydroelectric plants with reservoirs, which guaranteed supply all year round even when rainfall was low. Since 1990 this practice has been abandoned because of complaints from the populations displaced by the flooded areas. Plants came to be built without reservoirs – that is, ‘run-of-river’ – using only the rivers’ natural flow. This is the case of the Jirau, Santo Antônio and Belo Monte plants, whose cost has risen sharply relative to the electricity they produce: they are sized for the rivers’ maximum flow, which occurs during only a few months, and generate far less in the dry months.
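The cost penalty of sizing a plant for peak flow can be seen in a back-of-the-envelope sketch. This is a hypothetical illustration only: the monthly flows below are invented, not data for Jirau, Santo Antônio or Belo Monte, and generation is crudely assumed proportional to usable flow:

```python
# Back-of-the-envelope comparison of run-of-river vs reservoir sizing.
# Monthly flows are invented; generation is crudely modelled as
# proportional to usable flow, capped at installed capacity.

monthly_flow = [10, 9, 8, 5, 3, 2, 2, 2, 3, 5, 8, 10]  # arbitrary units

# Run-of-river: turbines sized for peak flow, using whatever flows past.
installed_ror = max(monthly_flow)                      # capacity = 10
energy_ror = sum(min(f, installed_ror) for f in monthly_flow)

# Reservoir: storage smooths the river to its mean flow, so a plant
# sized at the mean can run at full capacity all year for the same energy.
installed_res = sum(monthly_flow) / len(monthly_flow)  # capacity ~5.6
energy_res = installed_res * len(monthly_flow)

capacity_factor_ror = energy_ror / (installed_ror * len(monthly_flow))
capacity_factor_res = energy_res / (installed_res * len(monthly_flow))

print(round(capacity_factor_ror, 2))  # 0.56: the plant idles, on average
print(round(capacity_factor_res, 2))  # 1.0 in this idealised sketch
```

In this idealised sketch the run-of-river plant needs nearly twice the installed capacity to deliver the same annual energy, which is the sense in which its cost rises relative to the electricity produced.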

In these cases the problem was greatly overstated. In general, for every person affected by the construction of a plant, more than a hundred people benefit from the electricity produced. It happens that the few thousand people affected live around the plant and have organised to claim compensation (in some cases they are exploited by political groups), whereas the beneficiaries, who number in the millions, live far from the site and are not organised.

It falls to the public authorities to weigh the interests of the population as a whole, comparing the risks and losses borne by the few against the benefits received by the many. This has not been done, and the federal government has lacked the firmness to explain to society where the general interests of the Nation lie.

The same occurs with other large public works, such as roads, ports and infrastructure in general. One example is the Rodoanel Mário Covas, the ring road around the city of São Paulo, whose construction faced strong opposition both from those affected by the works and from some environmental groups. The firmness of the São Paulo state government, and the explanations it provided, made the project viable; today it is considered positive by the great majority, as it removes tens of thousands of trucks a day from São Paulo’s urban traffic and reduces the pollution they discharge over the population.

The lesson of that case should be applied to the Amazon’s hydroelectric plants, which have been contested by environmental groups that are not sufficiently well informed. What is needed here is an action like the one the Nobel laureates took on transgenics: accept hydroelectric plants built to the best technical and environmental standards, including reservoirs, without which they become barely viable, opening the way to more polluting energy sources such as coal and oil derivatives.

*PRESIDENT OF FAPESP; FORMER PRESIDENT OF CESP


Researcher comments on the article

JC 5485, 19 August 2016

Nagib Nassar, professor emeritus at UnB, questions the article ‘Transgenics and hydroelectric dams’, from O Estado de S. Paulo, republished in the Jornal da Ciência last Tuesday

Read the comment below:

I refer to the article by Professor José Goldemberg, published in Estadão and circulated by the Jornal da Ciência.

I disagree with the illustrious scientist, beginning with his statement that transgenics are made to protect plants from pests. It is well known that the only transgenic planted for that purpose in Brazil is Bt maize. The professor thus forgot, or chose to forget, that for this purpose a gene producing an insect-killing toxin is introduced into the plant, and the plant consequently comes to function as an insecticide!

The Bt toxin, just as it kills insects, is toxic to human beings themselves. The literature frequently cites the high risk, including the risk of death, to individuals. One example of these Bt maize varieties is MON 810: banned for human use by its own producing country and by France, Germany, England and other European countries. Unfortunately, the variety is authorised in Brazil, and those who authorised it were untroubled by turning us into mere guinea pigs! In poor African countries it was rejected even as a gift. Zambia preferred to see its people suffer hunger rather than die poisoned! Besides killing invading insects, the Bt toxin kills useful insects, such as honeybees and the other pollinators the plant needs in order to form fruit.

When this type of transgenic dies at the end of the growing season, its roots leave toxic residues that kill nitrogen-fixing bacteria, turning the soil into a poisoned environment for the growth of the fertiliser-producing bacteria. This prevents the growth of any leguminous crop. The manufacturer of this transgenic spends millions of reais on advertising of every kind, in every form and at every level; the result is the extremely high cost of transgenic seeds, which can reach 130 times the normal price. Small farmers, deceived and misled by the advertising, rush towards a tragic fate when they cannot pay their debts: suicide. There are many known cases in India, which in a single year recorded 180 deaths.

It is fine for a physicist to speak about hydroelectric plants, but it is questionable for him to pronounce dogmatically on transgenics. And why did he choose transgenics to pair with hydroelectric dams? Could it be a façade to hide the harm of transgenics? It reminds me of the manifesto signed by a hundred Nobel laureates in favour of transgenics, hiding behind golden rice. Among those laureates were physicists, chemists, even winners in literature – and, on top of everything, three dead men!

I am also reminded of a scientist far removed from the field who, ten years ago, went to the Chamber of Deputies with arguments and requests for the release of transgenic soybean – based not on scientific results, which were never presented and did not exist, but on the wish not to harm farmers who were smuggling soybean.

Nagib Nassar

Professor emeritus at the University of Brasília

Founding president of the FUNAGIB foundation (www.funagib.geneconserve.pro.br)

‘Neuroscience studies have surpassed psychoanalysis’, says Brazilian researcher (Folha de S.Paulo)

Juliana Cunha, 18.06.2016

With a 60-year career, 22,794 citations in journals, 60 awards and 710 published articles, Ivan Izquierdo, 78, is the most cited neuroscientist, and one of the most respected, in Latin America. Born in Argentina, he has lived in Brazil for 40 years and became a naturalised Brazilian in 1981. He now coordinates the Memory Centre of the Brain Institute at PUC-RS.

His research has helped to elucidate the different types of memory and to debunk the idea that specific brain areas are dedicated exclusively to a single type of activity.

He spoke to Folha during the World Congress on Brain, Behavior and Emotions, held this week in Buenos Aires. Izquierdo was the honouree of this edition of the congress.

In the interview, the scientist discusses the usefulness of traumatic memories and his scepticism about methods that promise to erase recollections, and says that psychoanalysis has been surpassed by neuroscience studies and today functions as a mere aesthetic exercise.

Bruno Todeschini
The neuroscientist Ivan Izquierdo during the congress in Buenos Aires

*

Folha – Is it possible to erase memories?
Ivan Izquierdo – It is possible to prevent a memory from being expressed, yes. It is normal, indeed human, to avoid the expression of certain recollections. When a given memory goes unused, the synapse involved falls into disuse and gradually atrophies.

Beyond that, it cannot be done. There is no technique for picking out memories and then erasing them, not least because the same information is stored several times in the brain, through a mechanism we call plasticity. Talk of erasing memories is pyrotechnics; it belongs to the media and the movies.

You work extensively with fear memory. Is our inability to erase it a pity, or something to celebrate?
Fear memory is what keeps us alive. It is the memory that can be accessed most quickly, and it is the most useful. Every time you go through a threatening situation, the essential information the brain needs to store is that the thing is dangerous. People want to erase fear memories because they are often uncomfortable, but if those memories were not there, we would put ourselves in harmful situations.

Of course, this process causes enormous stress. To move around a city, my brain calls up countless fear memories. Between having them and not having them, I prefer having them; they are what got me this far. But if we can reduce our exposure to risk, so much the better. The problem is often the stimulus, not the fear response.

But some fear memories are paralyzing and can be riskier than the situations they help avoid. How should we deal with them?
Better paralyzed than dead. The brain acts to preserve us; that is its priority. Of course the mechanism is prone to failure. If we judge that the response to a fear memory is exaggerated, we can try to make the brain give new meaning to the stimulus. It is possible, for example, to expose the patient repeatedly to the stimuli that created the memory, but without the trauma. That dissociates the experience from the fear.

Isn't that similar to what Freud tried to do with phobias?
Yes, Freud was one of the first to use extinction in the treatment of phobias, although he did not exactly believe in extinction. With extinction the memory persists; it is not erased, but the trauma is no longer there.

But many neuroscientists consider Freud outdated.
Every theory ages. Freud is a great reference and made important contributions. But psychoanalysis has been superseded by neuroscience studies; it belongs to a time when we could not run experiments or see what was happening in the brain. Today someone wants to talk to me about the unconscious? Where is it located? I am a scientist; I cannot believe in something just because it is interesting.

To me, psychoanalysis today is an aesthetic exercise, not a health treatment. If a person enjoys it, fine; it does no harm. But it is a pity when someone with a real, treatable problem fails to seek medical care in the belief that psychoanalysis is an alternative.

And forms of therapy other than Freudian analysis?
Cognitive therapy, certainly. There are ways to make a person change their response to a stimulus.

You came to Brazil because of the dictatorship in Argentina. Brazil is now going through a process that some call a coup; it is a memory in dispute. What do you make of this as a scientist?
I came because of a threat. I do not consider it a coup, but it is a very shrewd process. Changing a single word reframes an entire memory. There is indeed a dispute over how this collective memory will be constructed. The left uses the word ‘coup’ to evoke fear memories in a country that has already lived through a coup. As the word is repeated, it creates a powerful effect. We do not yet know how this memory will be consolidated, but the strategy is very shrewd.

The journalist JULIANA CUNHA traveled at the invitation of the World Congress on Brain, Behavior and Emotions

Curtailing global warming with bioengineering? Iron fertilization won’t work in much of Pacific (Science Daily)

Earth’s own experiments during ice ages showed little effect

Date: May 16, 2016
Source: The Earth Institute at Columbia University
Summary: Over the past half-million years, the equatorial Pacific Ocean has seen five spikes in the amount of iron-laden dust blown in from the continents. In theory, those bursts should have turbo-charged the growth of the ocean’s carbon-capturing algae — algae need iron to grow — but a new study shows that the excess iron had little to no effect.

With the right mix of nutrients, phytoplankton grow quickly, creating blooms visible from space. This image, created from MODIS data, shows a phytoplankton bloom off New Zealand. Credit: Robert Simmon and Jesse Allen/NASA


The results are important today, because as groups search for ways to combat climate change, some are exploring fertilizing the oceans with iron as a solution.

Algae absorb carbon dioxide (CO2), a greenhouse gas that contributes to global warming. Proponents of iron fertilization argue that adding iron to the oceans would fuel the growth of algae, which would absorb more CO2 and sink it to the ocean floor. The most promising ocean regions are those high in nutrients but low in chlorophyll, a sign that algae aren’t as productive as they could be. The Southern Ocean, the North Pacific, and the equatorial Pacific all fit that description. What’s missing, proponents say, is enough iron.

The new study, published this week in the Proceedings of the National Academy of Sciences, adds to growing evidence, however, that iron fertilization might not work in the equatorial Pacific as suggested.

Essentially, earth has already run its own large-scale iron fertilization experiments. During the ice ages, nearly three times more airborne iron blew into the equatorial Pacific than during non-glacial periods, but the new study shows that that increase didn’t affect biological productivity. At some points, as levels of iron-bearing dust increased, productivity actually decreased.

What matters instead in the equatorial Pacific is how iron and other nutrients are stirred up from below by upwelling fueled by ocean circulation, said lead author Gisela Winckler, a geochemist at Columbia University’s Lamont-Doherty Earth Observatory. The study found seven to 100 times more iron was supplied from the equatorial undercurrent than from airborne dust at sites spread across the equatorial Pacific. The authors write that although all of the nutrients might not be used immediately, they are used up over time, so the biological pump is already operating at full efficiency.

“Capturing carbon dioxide is what it’s all about: does iron raining in with airborne dust drive the capture of atmospheric CO2? We found that it doesn’t, at least not in the equatorial Pacific,” Winckler said.

The new findings don’t rule out iron fertilization elsewhere. Winckler and coauthor Robert Anderson of Lamont-Doherty Earth Observatory are involved in ongoing research that is exploring the effects of iron from dust on the Southern Ocean, where airborne dust supplies a larger share of the iron reaching the surface.

The PNAS paper follows another paper Winckler and Anderson coauthored earlier this year in Nature with Lamont graduate student Kassandra Costa looking at the biological response to iron in the equatorial Pacific during just the last glacial maximum, some 20,000 years ago. The new paper expands that study from a snapshot in time to a time series across the past 500,000 years. It confirms that Costa’s finding, that iron fertilization had no effect then, fit a pattern that extends across the past five glacial periods.

To gauge how productive the algae were, the scientists in the PNAS paper used deep-sea sediment cores from three locations in the equatorial Pacific that captured 500,000 years of ocean history. They tested along those cores for barium, a measure of how much organic matter is exported to the sea floor at each point in time, and for opal, a silicate mineral that comes from diatoms. Measures of thorium-232 reflected the amount of dust that blew in from land at each point in time.

“Neither natural variability of iron sources in the past nor purposeful addition of iron to equatorial Pacific surface water today, proposed as a mechanism for mitigating the anthropogenic increase in atmospheric CO2 inventory, would have a significant impact,” the authors concluded.

Past experiments with iron fertilization have had mixed results. The European Iron Fertilization Experiment (EIFEX) in 2004, for example, added iron in the Southern Ocean and was able to produce a burst of diatoms, which captured CO2 in their organic tissue and sank to the ocean floor. However, the German-Indian LOHAFEX project in 2009 experimented in a nearby location in the South Atlantic and found few diatoms. Instead, most of its algae were eaten up by tiny marine creatures, passing CO2 into the food chain rather than sinking it. In the LOHAFEX case, the scientists determined that another nutrient that diatoms need — silicic acid — was lacking.

The Intergovernmental Panel on Climate Change (IPCC) cautiously discusses iron fertilization in its latest report on climate change mitigation. It warns of potential risks, including the impact that higher productivity in one area may have on nutrients needed by marine life downstream, and the potential for expanding low-oxygen zones, increasing acidification of the deep ocean, and increasing nitrous oxide, a greenhouse gas more potent than CO2.

“While it is well recognized that atmospheric dust plays a significant role in the climate system by changing planetary albedo, the study by Winckler et al. convincingly shows that dust and its associated iron content is not a key player in regulating the oceanic sequestration of CO2 in the equatorial Pacific on large spatial and temporal scales,” said Stephanie Kienast, a marine geologist and paleoceanographer at Dalhousie University who was not involved in the study. “The classic paradigm of ocean fertilization by iron during dustier glacials can thus be rejected for the equatorial Pacific, similar to the Northwest Pacific.”


Journal Reference:

  1. Gisela Winckler, Robert F. Anderson, Samuel L. Jaccard, and Franco Marcantonio. Ocean dynamics, not dust, have controlled equatorial Pacific productivity over the past 500,000 years. PNAS, May 16, 2016. DOI: 10.1073/pnas.1600616113

Is there a limit to technological progress? (OESP)

May 16, 2016 | 3:00 a.m.

The idea that the stagnation of the world economy is due to the end of the “golden century” of scientific and technological innovation is becoming popular among politicians and governments. This “golden century” is usually defined as the period from 1870 to 1970, in which the foundations of the technological era we live in were laid.

Indeed, that period saw great advances in our knowledge, ranging from Darwin’s theory of evolution to the discovery of the laws of electromagnetism, which led to large-scale electricity generation and to telecommunications, including radio and television, with the resulting benefits for the well-being of populations. Other advances, in medicine, such as vaccines and antibiotics, extended the average human lifespan. The discovery and use of oil and natural gas also fall within this period.

Many argue that in no other one-century span, across the 10,000 years of human history, was so much progress achieved. This view of history, however, can be questioned, and it has been. In the preceding century, from 1770 to 1870, for example, there was also great progress, arising from the development of coal-fired engines, which made locomotives possible and launched the Industrial Revolution.

Even so, the nostalgics believe that the “golden period” of innovation has exhausted itself, and governments accordingly adopt purely economic measures to revive “progress”: subsidies for specific sectors, tax cuts and social policies to reduce inequality, among others, while neglecting support for science and technology.

Some of these policies might help, but they do not touch the fundamental aspect of the problem: keeping alive the advance of science and technology, which solved problems in the past and can help solve problems in the future.

To analyze the question properly, remember that it is not the number of new discoveries that guarantees their relevance. The advance of technology somewhat resembles what sometimes happens in the natural selection of living things: some species are so well adapted to their environment that they stop “evolving”. That is the case of the beetles that existed at the height of Egypt, 5,000 years ago, and are still here today, or of “fossil” fish species that have evolved little over millions of years.

Other examples are products of modern technology, such as the magnificent DC-3 aircraft, produced more than 50 years ago and still accounting for a significant share of world air traffic.

Even in more sophisticated areas, such as information technology, this seems to be happening. The basis of progress in this field was the “miniaturization” of the electronic chips that carry the transistors. In 1971, the chips produced by Intel (the leading company in the field) had 2,300 transistors on a die of 12 square millimeters. Today’s chips are only slightly larger but carry 5 billion transistors. This is what made possible personal computers, cell phones and countless other products. And it is why fixed-line telephony is being abandoned and communication via Skype is practically free and has revolutionized the world of communications.
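A quick back-of-the-envelope check on those figures (a sketch; it assumes 2016, the year of this column, as “today”):

```python
import math

# Figures cited in the text: Intel's 1971 chip had 2,300 transistors;
# today's chips carry about 5 billion.
transistors_1971 = 2_300
transistors_today = 5_000_000_000
years = 2016 - 1971  # assumed span, from the column's publication year

doublings = math.log2(transistors_today / transistors_1971)
print(f"{doublings:.1f} doublings")            # about 21 doublings
print(f"{years / doublings:.1f} years each")   # about 2.1 years per doubling
```

Roughly 21 doublings in 45 years, one every two years or so, is the familiar Moore’s-law cadence, consistent with the column’s point that this pace ran uninterrupted for over four decades before hitting physical limits.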

There are now indications that this miniaturization has reached its limits, which causes a certain gloom among the “high priests” of the sector. That view is mistaken. The level of success has been such that further progress in that direction is simply unnecessary, which is what happened with countless living things in the past.

What does look like the solution to the problem of long-term economic growth is the advance of technology in other areas that have not received the attention they deserve: new materials, artificial intelligence, industrial robots, genetic engineering, disease prevention and, above all, understanding the human brain, the most sophisticated product of the evolution of life on Earth.

Understanding how a combination of atoms and molecules can give rise to an organ as creative as the brain, capable of consciousness and of the creativity to compose symphonies like Beethoven’s, and at the same time capable of carrying out the extermination of millions of human beings, will probably be the most extraordinary advance Homo sapiens can achieve.

Advances in these areas could create a wave of innovation and material progress greater in quantity and quality than what the “golden century” produced. Moreover, we now face a new, global problem: environmental degradation, resulting in part from the very success of 20th-century technology. The task of reducing emissions of the gases that cause global warming (the result of burning fossil fuels) will by itself be herculean.

Before that, and on a much more pedestrian plane, the advances being made in the efficiency of natural resource use are extraordinary and have not received the credit and recognition they deserve.

To give just one example, in 1950 Americans spent, on average, 30% of their income on food. By 2013 that share had fallen to 10%. Spending on energy has also fallen, thanks to improved efficiency in automobiles and in other uses such as lighting and heating, which, incidentally, explains why the price of a barrel of oil fell from US$150 to less than US$30: there is simply too much oil in the world, just as there is idle capacity in steel and cement.

An example of a country following this path is Japan, whose economy is not growing much but whose population enjoys a high standard of living and continues to benefit gradually from the advances of modern technology.

*José Goldemberg is professor emeritus of the University of São Paulo (USP) and president of the São Paulo Research Foundation (Fapesp)

If The UAE Builds A Mountain Will It Actually Bring More Rain? (Vocativ)

You’re not the only one who thinks constructing a rain-inducing mountain in the desert is a bonkers idea

May 03, 2016 at 6:22 PM ET

Photo Illustration: R. A. Di ISO

The United Arab Emirates wants to build a mountain so the nation can control the weather—but some experts are skeptical about the effectiveness of this project, which may sound more like a James Bond villain’s diabolical plan than a solution to drought.

The actual construction of a mountain isn’t beyond the engineering prowess of the UAE. The small country on the Arabian Peninsula has pulled off grandiose environmental projects before, like the artificial Palm Islands off the coast of Dubai and an indoor ski hill in the Mall of the Emirates. But the scientific purpose of the mountain is questionable.

The UAE’s National Center for Meteorology and Seismology (NCMS) is currently collaborating with the U.S.-based University Corporation for Atmospheric Research (UCAR) for the first planning phase of the ambitious project, according to Arabian Business. The UAE government gave the two groups $400,000 in funding to determine whether they can bring more rain to the region by constructing a mountain that will foster better cloud-seeding.

Last week the NCMS revealed that the UAE spent $588,000 on cloud-seeding in 2015. Throughout the year, 186 flights dispersed potassium chloride, sodium chloride and magnesium into clouds—a process that can trigger precipitation. Now, the UAE is hoping they can enhance the chemical process by forcing air up around the artificial mountain, creating clouds that can be seeded more easily and efficiently.

“What we are looking at is basically evaluating the effects on weather through the type of mountain, how high it should be and how the slopes should be,” NCAR lead researcher Roelof Bruintjes told Arabian Business. “We will have a report of the first phase this summer as an initial step.”

But some scientists don’t expect NCAR’s research will lead to a rain-inducing alp. “I really doubt that it would work,” Raymond Pierrehumbert, a professor of physics at the University of Oxford told Vocativ. “You’d need to build a long ridge, not just a cone, otherwise the air would just go around. Even if you could do that, mountains cause local enhanced rain on the upslope side, but not much persistent cloud downwind, and if you need cloud seeding to get even the upslope rain, it’s really unlikely to work as there is very little evidence that cloud seeding produces much rainfall.”

Pierrehumbert, who specializes in geophysics and climate change, believes the regional environment would make the project especially difficult. “UAE is a desert because of the wind patterns arising from global atmospheric circulations, and any mountain they build is not going to alter those,” he said. 

Pierrehumbert concedes that NCAR is a respectable organization that will be able to use the “small amount of money to research the problem.” He thinks some good scientific study will come of the effort—perhaps helping to determine why a hot, humid area bordered by the ocean receives so little rainfall.

But he believes the minimal sum should go into another project: “They’d be way better off putting the money into solar-powered desalination plants.”

If the project doesn’t work out, at least wealthy Emiratis have a 125,000-square-foot indoor snow park to look forward to in 2018.

God of Thunder (NPR)

October 17, 201411:09 AM ET

In 1904, Charles Hatfield claimed he could turn around the Southern California drought. Little did he know, he was going to get much, much more water than he bargained for.

GLYNN WASHINGTON, HOST:

From PRX and NPR, welcome back to SNAP JUDGMENT the Presto episode. Today we’re calling on mysterious forces and we’re going to strap on the SNAP JUDGMENT time machine. Our own Eliza Smith takes the controls and spins the dial back 100 years into the past.

ELIZA SMITH, BYLINE: California, 1904. In the fields, oranges dry in their rinds. In the ‘burbs, lawns yellow. Poppies wilt on the hillsides. Meanwhile, Charles Hatfield sits at a desk in his father’s Los Angeles sewing machine business. His dad wants him to take over someday, but Charlie doesn’t want to spend the rest of his life knocking on doors and convincing housewives to buy his bobbins and thread. Charlie doesn’t look like the kind of guy who changes the world. He’s impossibly thin with a vanishing patch of mousy hair. He always wears the same drab tweed suit. But he thinks to himself just maybe he can quench the Southland’s thirst. So when he punches out his timecard, he doesn’t go home for dinner. Instead, he sneaks off to the Los Angeles Public Library and pores over stacks of books. He reads about shamans who believed that fumes from a pyre of herbs and alcohols could force rain from the sky. He reads modern texts too, about the pseudoscience of pluviculture, or rainmaking: the theory that explosives and pyrotechnics could crack the clouds. Charlie conducts his first weather experiment on his family ranch, just northeast of Los Angeles in the city of Pasadena. One night he pulls his youngest brother, Paul, out of bed to keep watch with a shotgun as he climbs atop a windmill, pours a cocktail of chemicals into a shallow pan and then waits.

He doesn’t have a burner or a fan or some hybrid, no – he just waits for the chemicals to evaporate into the clouds. Paul slumped into a slumber long ago and is now leaning against the foundation of the windmill, when the first droplet hits Charlie’s cheek. Then another. And another.

Charlie pulls out his rain gauge and measures .65 inches. It’s enough to convince him he can make rain.

That’s right, Charlie has the power. Word spreads in local papers and one by one, small towns Hemet, Volta, Gustine, Newman, Crows Landing, Patterson come to him begging for rain. And wherever Charlie goes, rain seems to follow. After he gives their town seven more inches of water than his contract stipulated, the Hemet News raves, Mr. Hatfield is proving beyond doubt that rain can be produced.

Within weeks he’s signing contracts with towns from the Pacific Coast to the Mississippi. Of course, there are doubters who claim that he tracks the weather, who claim he’s a fool chasing his luck.

But then Charlie gets an invitation to prove himself. San Diego, a major city, is starting to talk water rations, and they call on him. Of course, most of the city councilmen are dubious of Charlie’s charlatan claims. But still, cows are keeling over in their pastures and farmers are worrying over dying crops. It won’t hurt to hire him, they reason: if Charlie Hatfield can fill San Diego’s biggest reservoir, Morena Dam, with 10 billion gallons of water, he’ll earn himself $10,000. If he can’t, well then he’ll just walk away and the city will laugh the whole thing off.

One councilman jokes…

UNIDENTIFIED MAN #1: It’s heads – the city wins. Tails – Hatfield loses.

SMITH: Charlie and Paul set up camp in the remote hills surrounding the Morena Reservoir. This time they work for weeks building several towers. This is to be Charlie’s biggest rain yet. When visitors come to observe his experiments, Charlie turns his back to them, hiding his notebooks and chemicals and Paul fingers the trigger on his trusty rifle. And soon enough it’s pouring. Winds reach record speeds of over 60 miles per hour. But that isn’t good enough – Charlie needs the legitimacy a satisfied San Diego can grant him. And so he works non-stop dodging lightning bolts, relishing thunderclaps. He doesn’t care that he’s soaked to the bone – he can wield weather. The water downs power lines, floods streets, rips up rail tracks.

A Mission Valley man who had to be rescued by a row boat as he clung to a scrap of lumber wraps himself in a towel and shivers as he suggests…

UNIDENTIFIED MAN #2: Let’s pay Hatfield $100,000 to quit.

SMITH: But Charlie isn’t quitting. The rain comes down harder and harder. Dams and reservoirs across the county explode and the flood devastates every farm, every house in its wake. One winemaker is surfacing from the protection of his cellar when he spies a wave twice the height of a telephone pole tearing down his street. He grabs his wife and they run as fast as they can, only to turn and watch their house washed downstream.

And yet, Charlie smiles as he surveys his success. The Morena Reservoir is full. He grabs Paul and the two leave their camp to march the 50-odd miles to City Hall. He expects the indebted populace to kiss his mud-covered shoes. Instead, he’s met with glares and threats. By the time Charlie and Paul reach San Diego’s city center, they’ve stopped answering to the name Hatfield. They call themselves Benson to avoid bodily harm.

Still, when he stands before the city councilmen, Charlie declares his operations successful and demands his payment. The men glower at him.

San Diego is in ruins and worst of all – they’ve got blood on their hands. The flood drowned more than 50 people. It also destroyed homes, farms, telephone lines, railroads, streets, highways and bridges. San Diegans file millions of dollars in claims but Charlie doesn’t budge. He folds his arms across his chest, holds his head high and proclaims, the time is coming when drought will overtake this portion of the state. It will be then that you call for my services again.

So the city councilmen tell Charlie that if he’s sure he made it rain, they’ll give him his $10,000 – he’ll just have to take full responsibility for the flood. Charlie grits his teeth and tells them, it was coincidence. It rained because Mother Nature made it so. I am no rainmaker.

And then Charlie disappears. He goes on selling sewing machines and keeping quiet.

WASHINGTON: I’ll tell you what, California these days could use a little Charlie Hatfield. Big thanks to Eliza Smith for sharing that story and thanks as well to Leon Morimoto for sound design. Mischief managed – you’ve just gotten to the other side by means of other ways.

If you missed any part of this show, no need for a rampage – head on over to snapjudgment.org. There you’ll find the award-winning podcast – Mark, what award did we win? Movies, pictures, stuff. Amazing stories await. Get in on the conversation. SNAP JUDGMENT’s on Facebook, Twitter @snapjudgment.

Did you ever wind up in the slithering sitting room when you’re supposed to be in Gryffindor’s parlor? Well, me neither, but I’m sure it’s nothing like wandering the halls of the Corporation for Public Broadcasting. Completely different, but many thanks to them. PRX, Public Radio Exchange, hosts a similar annual Quidditch championships but instead of brooms they ride radios. Not quite the same visual effect, but it’s good clean fun all the same – prx.org.

WBEZ in Chicago has tricks up their sleeve and you may have reckoned that this is not the news. No way is this the news. In fact, if you’d just thrown that book with Voldemort trapped in it, thrown it in the fire, been done with the nonsense – and you would still not be as far away from the news as this is. But this is NPR.

Hit Steyerl | Politics of Post-Representation (Dis Blog)

[Accessed Nov 23, 2015]

In conversation with Marvin Jordan

From the militarization of social media to the corporatization of the art world, Hito Steyerl’s writings represent some of the most influential bodies of work in contemporary cultural criticism today. As a documentary filmmaker, she has created multiple works addressing the widespread proliferation of images in contemporary media, deepening her engagement with the technological conditions of globalization. Steyerl’s work has been exhibited in numerous solo and group exhibitions including documenta 12, Taipei Biennial 2010, and 7th Shanghai Biennial. She currently teaches New Media Art at Berlin University of the Arts.

Hito Steyerl, How Not To Be Seen: A Fucking Didactic Educational .MOV File (2013)

Marvin Jordan I’d like to open our dialogue by acknowledging the central theme for which your work is well known — broadly speaking, the socio-technological conditions of visual culture — and move toward specific concepts that underlie your research (representation, identification, the relationship between art and capital, etc). In your essay titled “Is a Museum a Factory?” you describe a kind of ‘political economy’ of seeing that is structured in contemporary art spaces, and you emphasize that a social imbalance — an exploitation of affective labor — takes place between the projection of cinematic art and its audience. This analysis leads you to coin the term “post-representational” in service of experimenting with new modes of politics and aesthetics. What are the shortcomings of thinking in “representational” terms today, and what can we hope to gain from transitioning to a “post-representational” paradigm of art practices, if we haven’t arrived there already?

Hito Steyerl Let me give you one example. A while ago I met an extremely interesting developer in Holland. He was working on smart phone camera technology. A representational mode of thinking photography is: there is something out there and it will be represented by means of optical technology ideally via indexical link. But the technology for the phone camera is quite different. As the lenses are tiny and basically crap, about half of the data captured by the sensor are noise. The trick is to create the algorithm to clean the picture from the noise, or rather to define the picture from within noise. But how does the camera know this? Very simple. It scans all other pictures stored on the phone or on your social media networks and sifts through your contacts. It looks through the pictures you already made, or those that are networked to you and tries to match faces and shapes. In short: it creates the picture based on earlier pictures, on your/its memory. It does not only know what you saw but also what you might like to see based on your previous choices. In other words, it speculates on your preferences and offers an interpretation of data based on affinities to other data. The link to the thing in front of the lens is still there, but there are also links to past pictures that help create the picture. You don’t really photograph the present, as the past is woven into it.
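The camera logic Steyerl describes, defining the picture from within noise by leaning on pictures already stored, can be caricatured in a few lines of Python (a toy sketch on invented data; the function, weights and images are illustrative, not any real phone pipeline, which would match local patches rather than whole frames):

```python
import numpy as np

def picture_from_memory(noisy, prior_images, prior_weight=0.8):
    """Toy version of the bet the phone makes: blend the noisy capture
    with the average of previously stored images (its 'memory')."""
    expected = np.mean(prior_images, axis=0)  # what the phone thinks it will see
    return (1 - prior_weight) * noisy + prior_weight * expected

rng = np.random.default_rng(0)
scene = np.full((8, 8), 0.5)                        # the thing in front of the lens
priors = [scene + rng.normal(0, 0.05, scene.shape) for _ in range(20)]
noisy = scene + rng.normal(0, 0.5, scene.shape)     # "about half the data are noise"

restored = picture_from_memory(noisy, priors)
raw_err = np.abs(noisy - scene).mean()
restored_err = np.abs(restored - scene).mean()
print(restored_err < raw_err)  # the remembered past pulls the image toward itself
```

The same blend also exhibits the cost Steyerl points to: if the scene genuinely changes, the prior drags the output back toward what was already seen, making “seeing unforeseen things more difficult.”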

The result might be a picture that never existed in reality, but that the phone thinks you might like to see. It is a bet, a gamble, some combination between repeating those things you have already seen and coming up with new versions of these, a mixture of conservatism and fabulation. The paradigm of representation stands to the present condition as traditional lens-based photography does to an algorithmic, networked photography that works with probabilities and bets on inertia. Consequently, it makes seeing unforeseen things more difficult. The noise will increase and random interpretation too. We might think that the phone sees what we want, but actually we will see what the phone thinks it knows about us. A complicated relationship — like a very neurotic marriage. I haven’t even mentioned external interference into what your phone is recording. All sorts of applications are able to remotely switch your camera on or off: companies, governments, the military. It could be disabled for whole regions. One could, for example, disable recording functions close to military installations, or conversely, live broadcast whatever you are up to. Similarly, the phone might be programmed to auto-pixellate secret or sexual content. It might be fitted with a so-called dick algorithm to screen out NSFW content or auto-modify pubic hair, stretch or omit bodies, exchange or collage context or insert AR advertisements and pop-up windows or live feeds. Now let’s apply this shift to the question of representative politics or democracy. The representational paradigm assumes that you vote for someone who will represent you. Thus the interests of the population will be proportionally represented. But current democracies work rather like smartphone photography, algorithmically clearing the noise and boosting some data over others. It is a system in which the unforeseen has a hard time happening because it is not yet in the database. 
It is about what to define as noise – something Jacques Rancière has defined as the crucial act in separating political subjects from domestic slaves, women and workers. Now this act is hardwired into technology, but instead of the traditional division of people and rabble, the results are post-representative militias, brands, customer loyalty schemes, open source insurgents and tumblrs.
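The computational move Steyerl describes – defining the picture from within the noise by leaning on previously stored pictures – can be sketched as a toy prior-based estimator. This is a minimal illustration, not a real phone pipeline: the function name, the simple averaging prior, and the `prior_weight` knob are all assumptions for the sketch.

```python
import numpy as np

def denoise_with_prior(noisy, past_shots, prior_weight=0.5):
    """Toy sketch: estimate an image as a blend of the noisy capture
    and the mean of earlier pictures (the device's 'memory').
    prior_weight is a hypothetical knob: 0 trusts only the sensor,
    1 trusts only what the device has seen before."""
    prior = np.mean(past_shots, axis=0)  # what the phone expects to see
    return (1 - prior_weight) * noisy + prior_weight * prior

# A flat grey 'scene' corrupted by heavy sensor noise, plus a stack of
# past pictures of similar scenes standing in for the photo library.
rng = np.random.default_rng(0)
scene = np.full((8, 8), 0.5)
noisy = scene + rng.normal(0, 0.3, scene.shape)
past = np.stack([scene + rng.normal(0, 0.05, scene.shape) for _ in range(20)])

estimate = denoise_with_prior(noisy, past, prior_weight=0.8)
# The prior-heavy estimate sits closer to the true scene than the raw
# capture - but it is a bet: a genuinely unforeseen scene would be
# pulled toward the past.
print(np.abs(noisy - scene).mean(), np.abs(estimate - scene).mean())
```

The design choice is exactly the wager in the text: the heavier the prior, the cleaner the familiar looks and the harder it becomes to photograph something the library has never seen.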

Additionally, Rancière's democratic solution – there is no noise, it is all speech; everyone has to be seen and heard – has been realized online as some sort of meta-noise in which everyone is monologuing incessantly, and no one is listening. Aesthetically, one might describe this condition as opacity in broad daylight: you could see anything, but what exactly, and why, is quite unclear. There are a lot of brightly lit glossy surfaces, yet they don't reveal anything but themselves as surface. Whatever there is – it's all there to see, but in the form of an incomprehensible, Kafkaesque glossiness, written in extraterrestrial code, perhaps subject to secret legislation. It certainly expresses something: a format, a protocol or executive order, but it effectively obfuscates its meaning. This is a far cry from a situation in which something – an image, a person, a notion – stood in for another and presumably acted in its interest. Today it stands in, but its relation to whatever it stands in for is cryptic, shiny, unstable; the link flickers on and off. Art could revel in this shiny instability – it does already. It could also be less baffled and mesmerised and see it for what the gloss mostly is: the not-so-discreet, consumer-friendly veneer of new and old oligarchies and plutotechnocracies.

MJ In your insightful essay, "The Spam of the Earth: Withdrawal from Representation", you extend your critique of representation by focusing on an irreducible excess at the core of image spam, a residue of unattainability, the "dark matter" of which it's composed. It seems as though image spam is circumscribed by an unintelligible horizon of its own making, a force of un-identifiability, which you detect when you say that it is "an accurate portrayal of what humanity is actually not… a negative image." Do you think this vacuous core of image spam – a distinctly negative property – serves as an adequate ground for a general theory of representation today? And how do you see today's visual culture affecting people's behavior toward identification with images?

HS Think of Twitter bots, for example. Bots are entities supposed to be mistaken for humans on social media sites. But they have become formidable political armies too – brilliant examples of how representative politics has mutated. Bot armies distort discussion on Twitter hashtags by spamming them with advertisements, tourist pictures or whatever. Bot armies have been active in Mexico, Syria, Russia and Turkey, where most political parties, above all the ruling AKP, are said to control 18,000 fake Twitter accounts using photos of Robbie Williams, Megan Fox and gay porn stars. A recent article revealed that, "in order to appear authentic, the accounts don't just tweet out AKP hashtags; they also quote philosophers such as Thomas Hobbes and movies like PS: I Love You." It is ever more difficult to identify bots – partly because humans are being paid to enter CAPTCHAs on their behalf (1,000 CAPTCHAs earn about 50 US cents). So what is a bot army? And how, and whom, does it represent, if anyone? Who is an AKP bot that wears the face of a gay porn star and quotes Hobbes' Leviathan – extolling the need to transform the rule of militias into statehood in order to escape the war of everyone against everyone else? Bot armies are a contemporary vox pop, the voice of the people, the voice of what the people are today. It can be a Facebook militia, your low-cost personalized mob, your digital mercenaries. Imagine your photo being used for one of these bots. It is the moment when your picture becomes quite autonomous, active, even militant. Bot armies are celebrity militias, wildly jump-cutting between glamour, sectarianism, porn, corruption and post-Baath Party ideology. Think of the meaning of the word "affirmative action" after Twitter bots and like farms! What does it represent?

MJ You have provided a compelling account of the depersonalization of the status of the image: a new process of de-identification that favors materialist participation in the circulation of images today.  Within the contemporary technological landscape, you write that “if identification is to go anywhere, it has to be with this material aspect of the image, with the image as thing, not as representation. And then it perhaps ceases to be identification, and instead becomes participation.” How does this shift from personal identification to material circulation — that is, to cybernetic participation — affect your notion of representation? If an image is merely “a thing like you and me,” does this amount to saying that identity is no more, no less than a .jpeg file?

HS Social media makes the shift from representation to participation very clear: people participate in the launch and life span of images, and indeed their life span, spread and potential are defined by participation. Think of the image not as surface but as all the tiny light impulses running through fiber at any one point in time. Some images will look like deep-sea swarms, some like cities from space, some are utter darkness. We could see the energy imparted to images by capital or quantified participation very literally; we could probably measure their popular energy in lumens. By partaking in circulation, people participate in this energy and create it.
What this means is a different question though – by now this type of circulation seems a little like the petting zoo of plutotechnocracies. It's where kids are allowed to make a mess – but just a little one – and if anyone organizes serious dissent, the seemingly anarchic sphere of circulation quickly reveals itself as a pedantic police apparatus aggregating relational metadata. It turns out to be an almost Althusserian ISA (Internet State Apparatus), hardwired behind a surface of 'kawaii' apps and online malls. As to identity, Heartbleed and more deliberate governmental hacking exploits certainly showed that identity goes far beyond a relationship with images: it entails a set of private keys, passwords, etc., that can be expropriated and détourned. More generally, identity is the name of the battlefield over your code – be it genetic, informational, pictorial. It is also an option that might provide protection if you fall outside any sort of modernist infrastructure. It might offer sustenance, food banks and medical services where common services either fail or don't exist. If the Hezbollah paradigm is so successful, it is because it provides an infrastructure to go with the Twitter handle, and as long as there is no alternative, many people need this kind of container for material survival. Huge religious and quasi-religious structures have sprung up in recent decades to take up the tasks abandoned by states, providing protection and survival in a reversal of the move described in Leviathan. Identity happens when the Leviathan falls apart and nothing is left of the commons but a set of policed relational metadata, Emoji and hijacked hashtags. This is the reason why the gay AKP porn-star bots are desperately quoting Hobbes' book: they are already sick of the war of Robbie Williams (Israel Defense Forces) against Robbie Williams (Syrian Electronic Army) against Robbie Williams (PRI/AAP) and are hoping for just about any entity to organize day care and affordable dentistry.


But beyond all the portentous vocabulary relating to identity, I believe that a widespread feature of the contemporary condition is exhaustion. The interesting thing about Heartbleed – to come back to one of the current threats to identity (as privacy) – is that it was produced by exhaustion, not effort. It is a bug introduced by open source developers who were not paid for something used by software giants worldwide. Nor, apparently, were there enough resources in the big corporations to audit the code; they just copy-pasted it into their applications and passed on the bug, fully relying on free volunteer labour to produce their proprietary products. Heartbleed records the exhaustion of trying to stay true to an ethics of commonality and exchange that has long since been exploited and privatized. So that exhaustion found its way back into the systems. For many people, and for many reasons – and on many levels – identity is just that: shared exhaustion.
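The flaw itself was tiny: the TLS heartbeat code trusted a length field supplied by the peer and echoed back that many bytes, without checking it against the payload actually received. A toy Python simulation of the pattern (hypothetical, not the real OpenSSL C code; the buffer layout and secret string are stand-ins):

```python
def heartbeat_response(payload: bytes, claimed_len: int, adjacent_memory: bytes) -> bytes:
    """Toy model of the Heartbleed pattern. The peer sends a payload plus
    a claimed length; the server echoes back claimed_len bytes starting
    at the payload. BUG: claimed_len is never checked against
    len(payload), so the reply can spill into adjacent memory."""
    buffer = payload + adjacent_memory   # heap layout: payload, then secrets
    return buffer[:claimed_len]          # missing check: claimed_len <= len(payload)

secrets = b"SESSION-KEY=hunter2"         # stand-in for keys/passwords on the heap

honest = heartbeat_response(b"ping", 4, secrets)    # asks for what it sent
greedy = heartbeat_response(b"ping", 64, secrets)   # asks for far more

print(honest)                  # the 4 bytes that were sent
print(b"hunter2" in greedy)    # adjacent memory leaks out
```

The fix in the real library amounted to one guard: discard the heartbeat if the claimed length exceeds the payload actually present.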

MJ This is an opportune moment to address the labor conditions of social media practice in the context of the art space. You write that "an art space is a factory, which is simultaneously a supermarket – a casino and a place of worship whose reproductive work is performed by cleaning ladies and cellphone-video bloggers alike." Incidentally, DIS launched a website called ArtSelfie just over a year ago, which encourages social media users to participate quite literally in "cellphone-video blogging" by aggregating their Instagram #artselfies in a separately integrated web archive. Given this uncanny coincidence, how can we grasp the relationship between social media blogging and the possibility of participatory co-curating on equal terms? Is there an irreconcilable antagonism between exploited affective labor and a genuinely networked art practice? Or can we move beyond – to use a phrase of yours – a museum crowd "struggling between passivity and overstimulation"?

HS I wrote this in relation to something my friend Carles Guerra noticed as early as 2009: big museums like the Tate were actively expanding their online marketing tools, encouraging people to basically build the museum experience for them by sharing, etc. It was clear to us that audience participation on this level was a tool of extraction and outsourcing, following a logic that has turned online consumers into involuntary data providers across the board. As in the previous example – Heartbleed – the paradigm of participation and generous contribution towards a commons tilts quickly into an asymmetrical relation, where only a minority of participants benefits from everyone's input: the digital 1 percent reaping the attention value generated by the remaining 99 percent.

Brian Kuan Wood put it very beautifully recently: Love is debt, an economy of love and sharing is what you end up with when left to your own devices. However, an economy based on love ends up being an economy of exhaustion – after all, love is utterly exhausting — of deregulation, extraction and lawlessness. And I don’t even want to mention likes, notes and shares, which are the child-friendly, sanitized versions of affect as currency.
All is fair in love and war. It doesn’t mean that love isn’t true or passionate, but just that love is usually uneven, utterly unfair and asymmetric, just as capital tends to be distributed nowadays. It would be great to have a little bit less love, a little more infrastructure.

MJ Long before Edward Snowden’s NSA revelations reshaped our discussions of mass surveillance, you wrote that “social media and cell-phone cameras have created a zone of mutual mass-surveillance, which adds to the ubiquitous urban networks of control,” underscoring the voluntary, localized, and bottom-up mutuality intrinsic to contemporary systems of control. You go on to say that “hegemony is increasingly internalized, along with the pressure to conform and perform, as is the pressure to represent and be represented.” But now mass government surveillance is common knowledge on a global scale — ‘externalized’, if you will — while social media representation practices remain as revealing as they were before. Do these recent developments, as well as the lack of change in social media behavior, contradict or reinforce your previous statements? In other words, how do you react to the irony that, in the same year as the unprecedented NSA revelations, “selfie” was deemed word of the year by Oxford Dictionaries?

HS Haha — good question!

Essentially I think it makes sense to compare our moment with the end of the twenties in the Soviet Union, when euphoria about electrification, the NEP (New Economic Policy) and montage gave way to bureaucracy, secret directives and paranoia. Today this corresponds to the sheer exhilaration of the World Wide Web being replaced by the drudgery of corporate apps, waterboarding and "normcore". I am not trying to say that Stalinism might happen again – that would be plain silly – but to acknowledge emerging authoritarian paradigms: forms of algorithmic consensual governance developed within neoliberal authoritarianism, relying heavily on conformism, "family" values and positive feedback, and backed up by all-out torture and secret legislation if necessary. On the other hand, things are also falling apart into uncontrollable love. One also has to remember that people really did love Stalin. People love algorithmic governance too, if it comes with watching unlimited amounts of Game of Thrones. But anyone even slightly interested in digital politics and technology is by now acquiring at least basic skills in disappearance and subterfuge.

Hito Steyerl, How Not To Be Seen: A Fucking Didactic Educational .MOV File (2013)

MJ In “Politics of Art: Contemporary Art and the Transition to Post-Democracy,” you point out that the contemporary art industry “sustains itself on the time and energy of unpaid interns and self-exploiting actors on pretty much every level and in almost every function,” while maintaining that “we have to face up to the fact that there is no automatically available road to resistance and organization for artistic labor.” Bourdieu theorized qualitatively different dynamics in the composition of cultural capital vs. that of economic capital, arguing that the former is constituted by the struggle for distinction, whose value is irreducible to financial compensation. This basically translates to: everyone wants a piece of the art-historical pie, and is willing to go through economic self-humiliation in the process. If striving for distinction is antithetical to solidarity, do you see a possibility of reconciling it with collective political empowerment on behalf of those economically exploited by the contemporary art industry?

HS In Art and Money, William Goetzmann, Luc Renneboog, and Christophe Spaenjers conclude that income inequality correlates with art prices. The bigger the difference between top incomes and no income, the higher the prices paid for some artworks. This means that the art market will benefit not only if fewer people have more money but also if more people have no money. It also means that increasing the number of zero incomes is likely, especially under current circumstances, to raise the price of some artworks. The poorer many people are (and the richer a few), the better the art market does; the more unpaid interns, the more expensive the art. But the art market itself may be following a similar pattern of inequality, basically creating a divide between the 0.01 percent (if not fewer) of artworks able to concentrate the bulk of sales and the remaining 99.99 percent. There is no short-term solution for this feedback loop, except of course not to accept this situation – individually, or preferably collectively, on all levels of the industry, including from the point of view of employers. There is a long-term benefit to this, not only for interns and artists but for everyone. Cultural industries that are too exclusively profit-oriented lose their appeal. If you want exciting things to happen you need a bunch of young and inspiring people creating a dynamic by doing risky, messy and confusing things. If they cannot afford to do this, they will eventually do it somewhere else. There needs to be space and resources for experimentation, even failure; otherwise things go stale. If these people move on to more accommodating sectors, the art sector will mentally shut down even more and become somewhat North Korean in its outlook – just like contemporary blockbuster CGI industries.
Let me explain: there is a managerial sleekness and awe-inspiring military perfection to every pixel in these productions, as in North Korean pixel parades, where thousands of soldiers wave color posters to form ever new pixel patterns. The result is quite something, but this something is neither inspiring nor exciting. If the art world keeps going down the road of raising art prices via starvation of its workers – and there is no reason to believe it will not – it will become the Disney version of Kim Jong Un's pixel parades. 12K starving interns waving pixels for giant CGI renderings of Marina Abramovic! Imagine the price it will fetch!


No escaping the Blue Marble (The Conversation)

August 20, 2015 6.46pm EDT

The Earth seen from Apollo, a photo now known as the “Blue Marble”. NASA

It is often said that the first full image of the Earth, “Blue Marble”, taken by the Apollo 17 space mission in December 1972, revealed Earth to be precious, fragile and protected only by a wafer-thin atmospheric layer. It reinforced the imperative for better stewardship of our “only home”.

But there was another way of seeing the Earth revealed by those photographs. For some the image showed the Earth as a total object, a knowable system, and validated the belief that the planet is there to be used for our own ends.

In this way, the “Blue Marble” image was not a break from technological thinking but its affirmation. A few years earlier, reflecting on the spiritual consequences of space flight, the theologian Paul Tillich wrote of how the possibility of looking down at the Earth gives rise to “a kind of estrangement between man and earth” so that the Earth is seen as a totally calculable material body.

For some, by objectifying the planet this way the Apollo 17 photograph legitimised the Earth as a domain of technological manipulation, a domain from which any unknowable and unanalysable element has been banished. It prompts the idea that the Earth as a whole could be subject to regulation.

This metaphysical possibility is today a physical reality in work now being carried out on geoengineering – technologies aimed at deliberate, large-scale intervention in the climate system designed to counter global warming or offset some of its effects.

While some proposed schemes are modest and relatively benign, the more ambitious ones – each now with a substantial scientific-commercial constituency – would see humanity mobilising its technological power to seize control of the climate system. And because the climate system cannot be separated from the rest of the Earth System, that means regulating the planet, probably in perpetuity.

Dreams of escape

Geoengineering is often referred to as Plan B, one we should be ready to deploy because Plan A, cutting global greenhouse gas emissions, seems unlikely to be implemented in time. Others are now working on what might be called Plan C. It was announced last year in The Times:

British scientists and architects are working on plans for a “living spaceship” like an interstellar Noah’s Ark that will launch in 100 years’ time to carry humans away from a dying Earth.

This version of Plan C is known as Project Persephone, which is curious as Persephone in Greek mythology was the queen of the dead. The project’s goal is to build “prototype exovivaria – closed ecosystems inside satellites, to be maintained from Earth telebotically, and democratically governed by a global community.”

NASA and DARPA, the US Defense Department’s advanced technologies agency, are also developing a “worldship” designed to take a multi-generational community of humans beyond the solar system.

Paul Tillich noticed the intoxicating appeal that space travel holds for certain kinds of people. Those first space flights became symbols of a new ideal of human existence, “the image of the man who looks down at the earth, not from heaven, but from a cosmic sphere above the earth”. A more common reaction to Project Persephone is summed up by a reader of the Daily Mail: “Only the ‘elite’ will go. The rest of us will be left to die.”

Perhaps being left to die on the home planet would be a more welcome fate. Imagine being trapped on this “exovivarium”, a self-contained world in which exported nature becomes a tool for human survival; a world where there is no night and day; no seasons; no mountains, streams, oceans or bald eagles; no ice, storms or winds; no sky; no sunrise; a closed world whose occupants would work to keep alive by simulation the archetypal habits of life on Earth.

Into the endless void

What kind of person imagines himself or herself living in such a world? What kind of being, after some decades, would such a post-terrestrial realm create? What kind of children would be bred there?

According to Project Persephone’s sociologist, Steve Fuller: “If the Earth ends up a no-go zone for human beings [sic] due to climate change or nuclear or biological warfare, we have to preserve human civilisation.”

Why would we have to preserve human civilisation? What is the value of a civilisation if not to raise human beings to a higher level of intellectual sophistication and moral responsibility? What is a civilisation worth if it cannot protect the natural conditions that gave birth to it?

Those who blast off leaving behind a ruined Earth would carry into space a fallen civilisation. As the Earth receded into the all-consuming blackness those who looked back on it would be the beings who had shirked their most primordial responsibility, beings corroded by nostalgia and survivor guilt.

He’s now mostly forgotten, but in the 1950s and 1960s the Swedish poet Harry Martinson was famous for his haunting epic poem Aniara, which told the story of a spaceship carrying a community of several thousand humans out into space escaping an Earth devastated by nuclear conflagration. At the end of the epic the spaceship’s controller laments the failure to create a new Eden:

“I had meant to make them an Edenic place,

but since we left the one we had destroyed

our only home became the night of space

where no god heard us in the endless void.”

So from the cruel fantasy of Plan C we are obliged to return to Plan A, and do all we can to slow the geological clock that has ticked over into the Anthropocene. If, on this earthen beast we have provoked, a return to the halcyon days of an undisturbed climate is no longer possible, at least we can resolve to calm the agitations of "the wakened giant" and so make this new and unwanted epoch one in which humans can survive.

Geoengineering proposal may backfire: Ocean pipes ‘not cool,’ would end up warming climate (Science Daily)

Date: March 19, 2015

Source: Carnegie Institution

Summary: There are a variety of proposals that involve using vertical ocean pipes to move seawater to the surface from the depths in order to reap different potential climate benefits. One idea involves using ocean pipes to facilitate direct physical cooling of the surface ocean by replacing warm surface ocean waters with colder, deeper waters. New research shows that these pipes could actually increase global warming quite drastically.


To combat global climate change caused by greenhouse gases, alternative energy sources and other types of environmental response actions are needed. There are a variety of proposals that involve using vertical ocean pipes to move seawater to the surface from the depths in order to reap different potential climate benefits. A new study from a group of Carnegie scientists determines that these types of pipes could actually increase global warming quite drastically. It is published in Environmental Research Letters.

One proposed strategy–called Ocean Thermal Energy Conversion, or OTEC–involves using the temperature difference between deeper and shallower water to power a heat engine and produce clean electricity. A second proposal is to move carbon from the upper ocean down into the deep, where it wouldn’t interact with the atmosphere. Another idea, and the focus of this particular study, proposes that ocean pipes could facilitate direct physical cooling of the surface ocean by replacing warm surface ocean waters with colder, deeper waters.

“Our prediction going into the study was that vertical ocean pipes would effectively cool the Earth and remain effective for many centuries,” said Ken Caldeira, one of the three co-authors.

The team, which also included lead author Lester Kwiatkowski as well as Katharine Ricke, configured a model to test this idea and what they found surprised them. The model mimicked the ocean-water movement of ocean pipes if they were applied globally reaching to a depth of about a kilometer (just over half a mile). The model simulated the motion created by an idealized version of ocean pipes, not specific pipes. As such the model does not include real spacing of pipes, nor does it calculate how much energy they would require.

Their simulations showed that while global temperatures could be cooled by ocean pipe systems in the short term, warming would actually start to increase just 50 years after the pipes go into use. Their model showed that vertical movement of ocean water resulted in a decrease of clouds over the ocean and a loss of sea-ice.

Colder air is denser than warm air. Because of this, the air over the ocean surface that has been cooled by water from the depths has a higher atmospheric pressure than the air over land. The cool air over the ocean sinks downward reducing cloud formation over the ocean. Since more of the planet is covered with water than land, this would result in less cloud cover overall, which means that more of the Sun’s rays are absorbed by Earth, rather than being reflected back into space by clouds.

Water mixing caused by ocean pipes would also bring sea ice into contact with warmer waters, resulting in melting. What’s more, this would further decrease the reflection of the Sun’s radiation, which bounces off ice as well as clouds.

After 60 years, the pipes would cause an increase in global temperature of up to 1.2 degrees Celsius (2.2 degrees Fahrenheit). Over several centuries, the pipes would put the Earth on a warming trend toward a temperature increase of 8.5 degrees Celsius (15.3 degrees Fahrenheit).

“I cannot envisage any scenario in which a large scale global implementation of ocean pipes would be advisable,” Kwiatkowski said. “In fact, our study shows it could exacerbate long-term warming and is therefore highly inadvisable at global scales.”

The authors do say, however, that ocean pipes might be useful on a small scale to help aerate ocean dead zones.


Journal Reference:

  1. Lester Kwiatkowski, Katharine L. Ricke and Ken Caldeira. Atmospheric consequences of disruption of the ocean thermocline. Environmental Research Letters, 2015. DOI: 10.1088/1748-9326/10/3/034016

Butterflies, Ants and the Internet of Things (Wired)

[Isn’t it scary that there are bright people who are that innocent? Or perhaps this is just a propaganda piece. – RT]

BY GEOFF WEBB, NETIQ

12.10.14  |  12:41 PM

Autonomous Cars (Autopia)

Buckminster Fuller once wrote, "There is nothing in the caterpillar that tells you it's going to be a butterfly." It's true that our capacity to look at things and truly understand their final form is often very limited. Nor can we necessarily predict what happens when many small changes combine – when small pebbles roll down a hillside and turn into a landslide that dams a river and floods a plain.

This is the situation we face now as we try to understand the final form and impact of the Internet of Things (IoT). Countless small, technological pebbles have begun to roll down the hillside from initial implementation to full realization.  In this case, the “pebbles” are the billions of sensors, actuators, and smart technologies that are rapidly forming the Internet of Things. And like the caterpillar in Fuller’s quote, the final shape of the IoT may look very different from our first guesses.

Whatever the world looks like once the IoT bears full fruit, the experience of our lives will be markedly different. The world around us will not only be aware of our presence, it will know who we are, and it will react to us, often before we are even aware of it. The day-to-day process of living will change because almost every piece of technology we touch (and many we do not) will begin to tailor its behavior to our specific needs and desires. Our car will talk to our house.

Walking into a store will be very different, as the displays around us could modify their behavior based on our preferences and buying habits.  The office of the future will be far more adaptive, less rigid, more connected – the building will know who we are and will be ready for us when we arrive.  Everything, from the way products are built and packaged and the way our buildings and cities are managed, to the simple process of travelling around, interacting with each other, will change and change dramatically. And it’s happening now.

We’re already seeing mainstream manufacturers building IoT awareness into their products – Whirlpool building Internet-aware washing machines, for example – and specialized IoT consumer tech such as LIFX light bulbs, which can be managed from a smartphone and respond to events in your house. Even toys are becoming more and more connected as our children go online at ever younger ages. And while many consumer purchases may already be somehow “IoT”-aware, we are still barely scratching the surface of the full potential of a fully connected world. The ultimate impact of the IoT will run far deeper, into the very fabric of our lives and the way we interact with the world around us.

One example is the German port of Hamburg. The Hamburg Port Authority is building what it refers to as a smartPort, literally embedding millions of sensors in everything from container-handling systems to street lights, to provide the data and management capabilities needed to move cargo through the port more efficiently, avoid traffic snarl-ups, and even predict environmental impacts through sensors that respond to noise and air pollution.

Securing all those devices and sensors will require a new way of thinking about technology and the interactions of “things,” people, and data. What we must do, then, is to adopt an approach that scales to manage the staggering numbers of these sensors and devices, while still enabling us to identify when they are under attack or being misused.

This is essentially the same problem we already face when dealing with human beings – how do I know when someone is doing something they shouldn’t? Specifically how can I identify a bad person in a crowd of law-abiding citizens?

The best answer is what I like to call the “Vegas Solution.” Rather than adopting a model that screens every person as they enter a casino, the security folks out in Nevada watch for behavior that indicates someone is up to no good, and then respond accordingly. It’s low impact for everyone else, but works with ruthless efficiency (as anyone who has ever tried counting cards in a casino will tell you).

This approach focuses on known behaviors and looks for anomalies. It is, at its most basic, the practical application of “identity.” If I understand the identity of the people I am watching, and as a result, their behavior, I can tell when someone is acting badly.

Now scale this up to the vast number of devices and sensors out there in the nascent IoT. If I understand the “identity” of all those washing machines, smart cars, traffic light sensors, industrial robots, and so on, I can determine what they should be doing, see when that behavior changes (even in subtle ways such as how they communicate with each other) and respond quickly when I detect something potentially bad.
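The "Vegas Solution" scaled to devices – learn each identity's normal behavior, then flag deviations – can be sketched minimally. This is an illustration, not a product: the device name, the single messages-per-hour feature, and the 3-sigma threshold are all assumptions; a real system would model far richer behavior.

```python
from statistics import mean, stdev

class BehaviorMonitor:
    """Keeps a per-identity baseline of messages-per-hour and flags
    readings that stray too many standard deviations from what that
    particular device usually does."""
    def __init__(self, threshold=3.0):
        self.threshold = threshold   # how many sigmas count as 'anomalous'
        self.history = {}            # device id -> list of past readings

    def observe(self, device_id, msgs_per_hour):
        self.history.setdefault(device_id, []).append(msgs_per_hour)

    def is_anomalous(self, device_id, msgs_per_hour):
        past = self.history.get(device_id, [])
        if len(past) < 2:
            return False             # not enough baseline to judge yet
        mu, sigma = mean(past), stdev(past)
        if sigma == 0:
            return msgs_per_hour != mu
        return abs(msgs_per_hour - mu) / sigma > self.threshold

monitor = BehaviorMonitor()
# A hypothetical washing machine normally phones home a few times an hour.
for reading in [3, 4, 3, 5, 4, 3, 4]:
    monitor.observe("washer-42", reading)

print(monitor.is_anomalous("washer-42", 4))     # ordinary chatter
print(monitor.is_anomalous("washer-42", 500))   # suddenly part of a botnet?
```

The point of the design is the one the article makes: no per-message screening, just a cheap statistical profile per identity, which is what lets the approach scale to billions of things.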

The approach is sound; in fact, it’s probably the only approach that will scale to meet the complexity of the billions upon billions of “things” that make up the IoT. The challenge is that a concept of identity must now be applied to far more “things” than we have ever managed before. If there is an “Internet of Everything,” there will need to be an “Identity of Everything” to go with it. Those identities will tell us what each device is, when it was created, how it should behave, what it is capable of, and so on. There are already proposed standards for this kind of thing, such as the UK’s HyperCat standard, which lets one device figure out what another device it can talk to actually does, and therefore what kind of information it might want to share.
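The identity-plus-anomaly model described above can be sketched in a few lines. This is an illustrative toy only: the device, its message-rate metric, and the three-sigma threshold are all invented for the example, not taken from any real IoT security product.

```python
from dataclasses import dataclass, field
from statistics import mean, stdev

@dataclass
class DeviceIdentity:
    """Minimal identity record: what a device is and how it normally behaves."""
    device_id: str
    device_type: str
    baseline: list = field(default_factory=list)  # observed message rates

    def learn(self, msgs_per_minute: float) -> None:
        # Build up a profile of the device's normal behavior.
        self.baseline.append(msgs_per_minute)

    def is_anomalous(self, msgs_per_minute: float, threshold: float = 3.0) -> bool:
        # Flag readings more than `threshold` standard deviations from baseline.
        if len(self.baseline) < 2:
            return False  # not enough history to judge
        mu, sigma = mean(self.baseline), stdev(self.baseline)
        if sigma == 0:
            return msgs_per_minute != mu
        return abs(msgs_per_minute - mu) / sigma > threshold

sensor = DeviceIdentity("lamp-042", "street-light")
for rate in [4.9, 5.1, 5.0, 4.8, 5.2]:
    sensor.learn(rate)

print(sensor.is_anomalous(5.0))    # → False: normal traffic
print(sensor.is_anomalous(250.0))  # → True: sudden flood, possibly compromised
```

In a real deployment the "behavior" profiled per identity would of course be richer than one message rate, but the shape of the approach (profile, then watch for deviation) is the same.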

Where things get really interesting, however, is when we start to watch the interactions of all these identities, and especially the interactions between the “thing” identities and our own. How we humans, as Internet users, interact with all the devices around us will provide even more insight into our lives, wants, and behaviors. Watching how I interact with my car, and the car with the road, and so on, will help manage city traffic far more efficiently than broad-brush traffic studies. Likewise, as the wearable technology I have on my person (or in my person) interacts with the sensors around me, my experience of almost everything, from shopping to public services, can be tailored and managed more efficiently. This, ultimately, is the promise of the IoT: a world that is responsive, intelligent, and tailored for every situation.

As we continue to add more and more sensors and smart devices, the potential power of the IoT grows. Many small, slightly smart things have a habit of combining to perform amazing feats. To take an example from nature, leaf-cutter ants, tiny in the extreme, nevertheless combine to form the second most complex social structures on Earth (after humans) and can build staggeringly large homes.

When we combine the billions of smart devices into the final IoT, we should expect to be surprised by the final form all those interactions take, and by the complexity of the thing we create.  Those things can and will work together, and how they behave will be defined by the identities we give them today.

Geoff Webb is Director of Solution Strategy at NetIQ.

Climate manipulation may cause unwanted effects (N.Y.Times/FSP)

Ilvy Njiokiktjien/The New York Times
Olivine, a green-tinted mineral said to remove carbon dioxide from the atmosphere, in the hands of retired geochemist Olaf Schuiling in Maasland, Netherlands, Oct. 9, 2014. Once considered the stuff of wild-eyed fantasies, such ideas for countering climate change — known as geoengineering solutions — are now being discussed seriously by scientists. (Ilvy Njiokiktjien/The New York Times)

HENRY FOUNTAIN

FROM THE “NEW YORK TIMES”

18/11/2014 02:01

For Olaf Schuiling, the solution to global warming lies beneath our feet.

Schuiling, a retired geochemist, believes that climate salvation lies in olivine, a greenish mineral abundant throughout the world. When exposed to the elements, it slowly pulls carbon dioxide out of the atmosphere.

Olivine has been doing this naturally for billions of years, but Schuiling wants to speed the process up by spreading it on fields and beaches and using it in dikes, paths, and even playgrounds. Sprinkle the right amount of the crushed rock, he says, and it will eventually remove enough carbon dioxide to slow the rise in global temperatures.
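Back-of-the-envelope arithmetic gives a feel for the quantities involved. Assuming pure forsterite olivine (Mg2SiO4) and complete weathering via Mg2SiO4 + 4 CO2 + 4 H2O → 2 Mg²⁺ + 4 HCO3⁻ + H4SiO4 (an idealization: natural olivine is a magnesium–iron mix and rarely weathers to completion):

```python
# Idealized weathering stoichiometry for forsterite olivine.
M_OLIVINE = 140.69   # g/mol, Mg2SiO4 (forsterite)
M_CO2 = 44.01        # g/mol
CO2_PER_OLIVINE = 4  # moles of CO2 consumed per mole of olivine

tonnes_co2_per_tonne_olivine = CO2_PER_OLIVINE * M_CO2 / M_OLIVINE
print(f"{tonnes_co2_per_tonne_olivine:.2f} t CO2 per t olivine")  # ≈ 1.25

# So offsetting one gigatonne of CO2 would need roughly:
olivine_needed_gt = 1.0 / tonnes_co2_per_tonne_olivine
print(f"{olivine_needed_gt:.2f} Gt of ground olivine")  # ≈ 0.80
```

The ratio helps explain the scale problem noted later in the article: billions of tonnes of rock would have to be mined, ground, and transported to matter at the global level.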

“Let the Earth help us to save it,” said Schuiling, 82, in his office at Utrecht University.
Ideas for countering climate change, such as these geoengineering proposals, were once considered pure fantasy.

Yet the effects of climate change may become so severe that such solutions could come to be considered seriously.

Schuiling’s idea is one of several that aim to reduce levels of carbon dioxide, the main greenhouse gas, so that the atmosphere retains less heat.

Other approaches, potentially faster and more feasible but riskier, would create the equivalent of a sunshade around the planet, either by dispersing reflective droplets in the stratosphere or by spraying seawater to form more clouds over the oceans. With less sunlight reaching the Earth’s surface, less heat would be retained and temperatures would fall quickly.

No one is sure that any geoengineering technique would work, and many approaches in the field seem impractical. Schuiling’s, for example, would take decades to have even a small impact, and the mining, grinding, and transport of the billions of tonnes of olivine required would themselves produce enormous carbon emissions.

Jasper Juinen/The New York Times
Kids play on a playground made with olivine, a material said to remove carbon dioxide from the atmosphere, in Arnhem, Netherlands, Oct. 9, 2014. (Jasper Juinen/The New York Times)

Many people consider geoengineering a desperate last resort on climate change, one that would divert the world’s attention from the goal of eliminating the emissions at the root of the problem.

The climate is a highly complex system, so manipulating temperatures may also have consequences, such as changes in rainfall, that could be catastrophic, or that could benefit one region at the expense of another. Critics also note that geoengineering could be deployed unilaterally by a single country, creating another source of geopolitical tension.

Some experts, however, argue that the current situation is becoming calamitous. “We may soon be left with only the choice between geoengineering and suffering,” said Andy Parker of the Institute for Advanced Sustainability Studies in Potsdam, Germany.

In 1991, a volcanic eruption in the Philippines spewed the largest cloud of sulfur dioxide ever recorded into the upper atmosphere. The gas formed droplets of sulfuric acid, which reflected sunlight back into space. For three years, average global temperatures fell by about 0.5 degrees Celsius. One geoengineering technique would mimic that effect by spraying sulfuric acid droplets into the stratosphere.

David Keith, a researcher at Harvard University, said that this technique, called solar radiation management (SRM), should only ever be deployed slowly and carefully, so that it could be halted if it disrupted weather patterns or caused other problems.

Some critics of geoengineering doubt that the impacts could ever be balanced. People in developing countries are affected by climate change largely caused by the actions of industrialized nations. Why, then, should they trust that spreading droplets in the sky would help them?

“Nobody likes being the rat in someone else’s lab,” said Pablo Suarez of the Red Cross/Red Crescent Climate Centre.

Ideas for removing carbon dioxide from the air cause less alarm. Although they raise thorny issues (olivine, for example, contains small amounts of metals that could contaminate the environment), they would work far more slowly and indirectly, affecting the climate over decades by altering the atmosphere.

Because Dr. Schuiling has been promoting his idea in the Netherlands for years, the country has taken to olivine. Anyone aware of it can spot the crushed rock in paths, gardens, and playgrounds.

Eddy Wijnker, a former acoustic engineer, founded the company greenSand in the small town of Maasland. It sells olivine sand for home or commercial use, and it also sells “green sand certificates” that fund the spreading of the sand along highways.

Schuiling’s persistence has also spurred research. At the Royal Netherlands Institute for Sea Research in Yerseke, ecologist Francesc Montserrat is investigating the possibility of spreading olivine on the seabed. In Belgium, researchers at the University of Antwerp are studying olivine’s effects on crops such as barley and wheat.

Most people working in geoengineering point to the need for more research and to the limits of computer simulations.

Little funding worldwide is devoted to geoengineering research. Even so, the mere suggestion of field experiments can cause a public outcry. “People like bright lines, and an obvious one is that it’s fine to test things on a computer or on a lab bench,” said Matthew Watson of the University of Bristol in the UK. “But they react badly as soon as you start to step into the real world.”

Watson knows those lines well. He led a project, funded by the British government, that included a relatively innocuous test of one technology. In 2011, the researchers planned to launch a balloon to an altitude of about one kilometer and try to pump a small amount of water up a hose to it. The proposal triggered protests in the UK, was postponed for half a year, and was finally cancelled.

Today there is little prospect of government support for any kind of geoengineering test in the US, where many politicians deny that climate change is even real.

“The conventional wisdom is that the right doesn’t want to talk about it because that would acknowledge the problem,” said Rafe Pomerance, who worked on environmental issues at the State Department. “And the left is worried about the impact of emissions.”

So it would be good to discuss the subject openly, Pomerance said. “It will take some time yet, but it’s inevitable,” he added.

Projecting a robot’s intentions: New spin on virtual reality helps engineers read robots’ minds (Science Daily)

Date: October 29, 2014

Source: Massachusetts Institute of Technology

Summary: In a darkened, hangar-like space inside MIT’s Building 41, a small, Roomba-like robot is trying to make up its mind. Standing in its path is an obstacle — a human pedestrian who’s pacing back and forth. To get to the other side of the room, the robot has to first determine where the pedestrian is, then choose the optimal route to avoid a close encounter. As the robot considers its options, its “thoughts” are projected on the ground: A large pink dot appears to follow the pedestrian — a symbol of the robot’s perception of the pedestrian’s position in space.

A new spin on virtual reality helps engineers read robots’ minds. Credit: Video screenshot courtesy of Melanie Gonick/MIT

In a darkened, hangar-like space inside MIT’s Building 41, a small, Roomba-like robot is trying to make up its mind.

Standing in its path is an obstacle — a human pedestrian who’s pacing back and forth. To get to the other side of the room, the robot has to first determine where the pedestrian is, then choose the optimal route to avoid a close encounter.

As the robot considers its options, its “thoughts” are projected on the ground: A large pink dot appears to follow the pedestrian — a symbol of the robot’s perception of the pedestrian’s position in space. Lines, each representing a possible route for the robot to take, radiate across the room in meandering patterns and colors, with a green line signifying the optimal route. The lines and dots shift and adjust as the pedestrian and the robot move.
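The route-selection step being visualized can be caricatured as a cost function: score each candidate path by its length, with an infinite penalty for any waypoint that comes too close to the perceived obstacle position (the "pink dot"). All coordinates, candidate paths, and the clearance radius below are invented for illustration; the lab's actual planner is far more sophisticated.

```python
import math

def route_cost(path, obstacle, clearance=1.0):
    """Length of a piecewise-linear path, or infinity if any waypoint
    comes within `clearance` of the obstacle's estimated position."""
    if any(math.dist(p, obstacle) < clearance for p in path):
        return math.inf
    return sum(math.dist(a, b) for a, b in zip(path, path[1:]))

pedestrian = (2.0, 1.0)  # the "pink dot": perceived obstacle position

candidates = [
    [(0, 0), (2, 1), (4, 0)],     # straight through the pedestrian
    [(0, 0), (2, 3), (4, 0)],     # wide detour above
    [(0, 0), (2, -0.5), (4, 0)],  # tighter detour below
]

# The "green line": the cheapest collision-free route.
best = min(candidates, key=lambda p: route_cost(p, pedestrian))
print(best)  # → [(0, 0), (2, -0.5), (4, 0)]
```

Projecting each candidate's cost on the floor, as MVR does, makes it immediately visible why the planner preferred one of these lines over the others.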

This new visualization system combines ceiling-mounted projectors with motion-capture technology and animation software to project a robot’s intentions in real time. The researchers have dubbed the system “measurable virtual reality” (MVR), a spin on conventional virtual reality that’s designed to visualize a robot’s “perceptions and understanding of the world,” says Ali-akbar Agha-mohammadi, a postdoc in MIT’s Aerospace Controls Lab.

“Normally, a robot may make some decision, but you can’t quite tell what’s going on in its mind — why it’s choosing a particular path,” Agha-mohammadi says. “But if you can see the robot’s plan projected on the ground, you can connect what it perceives with what it does to make sense of its actions.”

Agha-mohammadi says the system may help speed up the development of self-driving cars, package-delivering drones, and other autonomous, route-planning vehicles.

“As designers, when we can compare the robot’s perceptions with how it acts, we can find bugs in our code much faster,” Agha-mohammadi says. “For example, if we fly a quadrotor, and see something go wrong in its mind, we can terminate the code before it hits the wall, or breaks.”

The system was developed by Shayegan Omidshafiei, a graduate student, and Agha-mohammadi. They and their colleagues, including Jonathan How, a professor of aeronautics and astronautics, will present details of the visualization system at the American Institute of Aeronautics and Astronautics’ SciTech conference in January.

Seeing into the mind of a robot

The researchers initially conceived of the visualization system in response to feedback from visitors to their lab. During demonstrations of robotic missions, it was often difficult for people to understand why robots chose certain actions.

“Some of the decisions almost seemed random,” Omidshafiei recalls.

The team developed the system as a way to visually represent the robots’ decision-making process. The engineers mounted 18 motion-capture cameras on the ceiling to track multiple robotic vehicles simultaneously. They then developed computer software that visually renders “hidden” information, such as a robot’s possible routes, and its perception of an obstacle’s position. They projected this information on the ground in real time, as physical robots operated.

The researchers soon found that by projecting the robots’ intentions, they were able to spot problems in the underlying algorithms, and make improvements much faster than before.

“There are a lot of problems that pop up because of uncertainty in the real world, or hardware issues, and that’s where our system can significantly reduce the amount of effort spent by researchers to pinpoint the causes,” Omidshafiei says. “Traditionally, physical and simulation systems were disjointed. You would have to go to the lowest level of your code, break it down, and try to figure out where the issues were coming from. Now we have the capability to show low-level information in a physical manner, so you don’t have to go deep into your code, or restructure your vision of how your algorithm works. You could see applications where you might cut down a whole month of work into a few days.”

Bringing the outdoors in

The group has explored a few such applications using the visualization system. In one scenario, the team is looking into the role of drones in fighting forest fires. Such drones may one day be used both to survey and to squelch fires — first observing a fire’s effect on various types of vegetation, then identifying and putting out those fires that are most likely to spread.

To make fire-fighting drones a reality, the team is first testing the possibility virtually. In addition to projecting a drone’s intentions, the researchers can also project landscapes to simulate an outdoor environment. In test scenarios, the group has flown physical quadrotors over projections of forests, shown from an aerial perspective to simulate a drone’s view, as if it were flying over treetops. The researchers projected fire on various parts of the landscape, and directed quadrotors to take images of the terrain — images that could eventually be used to “teach” the robots to recognize signs of a particularly dangerous fire.

Going forward, Agha-mohammadi says, the team plans to use the system to test drone performance in package-delivery scenarios. Toward this end, the researchers will simulate urban environments by creating street-view projections of cities, similar to zoomed-in perspectives on Google Maps.

“Imagine we can project a bunch of apartments in Cambridge,” Agha-mohammadi says. “Depending on where the vehicle is, you can look at the environment from different angles, and what it sees will be quite similar to what it would see if it were flying in reality.”

Because the Federal Aviation Administration has placed restrictions on outdoor testing of quadrotors and other autonomous flying vehicles, Omidshafiei points out that testing such robots in a virtual environment may be the next best thing. In fact, the sky’s the limit as far as the types of virtual environments that the new system may project.

“With this system, you can design any environment you want, and can test and prototype your vehicles as if they’re fully outdoors, before you deploy them in the real world,” Omidshafiei says.

This work was supported by Boeing.

Video: http://www.youtube.com/watch?v=utM9zOYXgUY

Global warming pioneer calls for carbon dioxide to be taken from atmosphere and stored underground (Science Daily)

Date: August 28, 2014

Source: European Association of Geochemistry

Summary: Wally Broeker, the first person to alert the world to global warming, has called for atmospheric carbon dioxide to be captured and stored underground.


Wally Broeker, the first person to alert the world to global warming, has called for atmospheric CO2 to be captured and stored underground. He says that carbon capture, combined with limits on fossil fuel emissions, is the best way to avoid global warming getting out of control over the next fifty years. Professor Broeker (Columbia University, New York) made the call during his presentation to the International Carbon Conference in Reykjavik, Iceland, where 150 scientists are meeting to discuss carbon capture and storage.

He was presenting an analysis which showed that the world has been cooling very slowly, over the last 51 million years, but that human activity is causing a rise in temperature which will lead to problems over the next 100,000 years.

“We have painted ourselves into a tight corner. We can’t reduce our reliance on fossil fuels quickly enough, so we need to look at alternatives.

“One of the best ways to deal with this is likely to be carbon capture — in other words, putting the carbon back where it came from, underground. There has been great progress in capturing carbon from industrial processes, but to really make a difference we need to begin to capture atmospheric CO2. Ideally, we could reach a stage where we could control the levels of CO2 in the atmosphere, like you control your central heating. Continually increasing CO2 levels means that we will need to actively manage CO2 levels in the environment, not just stop more being produced. The technology is proven, it just needs to be brought to a stage where it can be implemented.”

Wally Broeker was speaking at the International Carbon Conference in Reykjavik, where 150 scientists are meeting to discuss how best CO2 can be removed from the atmosphere as part of a programme to reduce global warming.

Meeting co-convener Professor Eric Oelkers (University College London and University of Toulouse) commented: “Capture is now at a crossroads; we have proven methods to store carbon in the Earth but are limited in our ability to capture this carbon directly from the atmosphere. We are very good at capturing carbon from factories and power stations, but because roughly two-thirds of our carbon originates from disperse sources, implementing direct air capture is key to solving this global challenge.”

European Association of Geochemistry. “Global warming pioneer calls for carbon dioxide to be taken from atmosphere and stored underground.” ScienceDaily. ScienceDaily, 28 August 2014. <www.sciencedaily.com/releases/2014/08/140828110915.htm>.

Carbon dioxide ‘sponge’ could ease transition to cleaner energy (Science Daily)

Date: August 10, 2014

Source: American Chemical Society (ACS)

Summary: A plastic sponge that sops up the greenhouse gas carbon dioxide might ease our transition away from polluting fossil fuels to new energy sources like hydrogen. A relative of food container plastics could play a role in President Obama’s plan to cut carbon dioxide emissions. The material might also someday be integrated into power plant smokestacks.


Plastic that soaks up carbon dioxide could someday be used in plant smokestacks.
Credit: American Chemical Society

A sponge-like plastic that sops up the greenhouse gas carbon dioxide (CO2) might ease our transition away from polluting fossil fuels and toward new energy sources, such as hydrogen. The material — a relative of the plastics used in food containers — could play a role in President Obama’s plan to cut CO2 emissions 30 percent by 2030, and could also be integrated into power plant smokestacks in the future.

The report on the material is one of nearly 12,000 presentations at the 248th National Meeting & Exposition of the American Chemical Society (ACS), the world’s largest scientific society, taking place here through Thursday.

“The key point is that this polymer is stable, it’s cheap, and it adsorbs CO2 extremely well. It’s geared toward function in a real-world environment,” says Andrew Cooper, Ph.D. “In a future landscape where fuel-cell technology is used, this adsorbent could work toward zero-emission technology.”

CO2 adsorbents are most commonly used to remove the greenhouse gas pollutant from smokestacks at power plants where fossil fuels like coal or gas are burned. However, Cooper and his team intend the adsorbent, a microporous organic polymer, for a different application — one that could lead to reduced pollution.

The new material would be a part of an emerging technology called an integrated gasification combined cycle (IGCC), which can convert fossil fuels into hydrogen gas. Hydrogen holds great promise for use in fuel-cell cars and electricity generation because it produces almost no pollution. IGCC is a bridging technology that is intended to jump-start the hydrogen economy, or the transition to hydrogen fuel, while still using the existing fossil-fuel infrastructure. But the IGCC process yields a mixture of hydrogen and CO2 gas, which must be separated.

Cooper, who is at the University of Liverpool, says that the sponge works best under the high pressures intrinsic to the IGCC process. Just like a kitchen sponge swells when it takes on water, the adsorbent swells slightly when it soaks up CO2 in the tiny spaces between its molecules. When the pressure drops, he explains, the adsorbent deflates and releases the CO2, which they can then collect for storage or convert into useful carbon compounds.
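The pressure-swing behavior Cooper describes can be caricatured with a Langmuir isotherm: equilibrium uptake rises with pressure, so cycling between high and low pressure first captures and then releases CO2. The constants below are illustrative placeholders, not the polymer's measured properties.

```python
def langmuir_uptake(pressure_bar, q_max=2.0, b=0.5):
    """Equilibrium CO2 uptake (mmol/g) vs pressure:
    q = q_max * b * P / (1 + b * P).
    q_max (capacity) and b (affinity) are invented, illustrative constants."""
    return q_max * b * pressure_bar / (1 + b * pressure_bar)

adsorb = langmuir_uptake(30.0)   # high pressure, as inside the IGCC process
release = langmuir_uptake(1.0)   # pressure let down to collect the CO2
print(f"captured per cycle: {adsorb - release:.2f} mmol/g")  # → 1.21
```

The working capacity is the difference between the two points on the isotherm, which is why an adsorbent that "works best under high pressure" suits a pressure-swing process.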

The material, which is a brown, sand-like powder, is made by linking together many small carbon-based molecules into a network. Cooper explains that the idea to use this structure was inspired by polystyrene, a plastic used in styrofoam and other packaging material. Polystyrene can adsorb small amounts of CO2 by the same swelling action.

One advantage of using polymers is that they tend to be very stable. The material can even withstand being boiled in acid, proving it should tolerate the harsh conditions in power plants where CO2 adsorbents are needed. Other CO2 scrubbers — whether made from plastics or metals or in liquid form — do not always hold up so well, he says. Another advantage of the new adsorbent is its ability to adsorb CO2 without also taking on water vapor, which can clog up other materials and make them less effective. Its low cost also makes the sponge polymer attractive. “Compared to many other adsorbents, they’re cheap,” Cooper says, mostly because the carbon molecules used to make them are inexpensive. “And in principle, they’re highly reusable and have long lifetimes because they’re very robust.”

Cooper also will describe ways to adapt his microporous polymer for use in smokestacks and other exhaust streams. He explains that it is relatively simple to embed the spongy polymers in the kinds of membranes already being evaluated to remove CO2 from power plant exhaust, for instance. Combining two types of scrubbers could make much better adsorbents by harnessing the strengths of each, he explains.

The research was funded by the Engineering and Physical Sciences Research Council and E.ON Energy.

Geoengineering the Earth’s climate sends policy debate down a curious rabbit hole (The Guardian)

Many of the world’s major scientific establishments are discussing the concept of modifying the Earth’s climate to offset global warming

Monday 4 August 2014


Many leading scientific institutions are now looking at proposed ways to engineer the planet’s climate to offset the impacts of global warming. Photograph: NASA/REUTERS

There’s a bit in Alice’s Adventures in Wonderland where things get “curiouser and curiouser” as the heroine tries to reach a garden at the end of a rat-hole sized corridor that she’s just way too big for.

She drinks a potion and eats a cake with no real clue what the consequences might be. She grows to nine feet tall, shrinks to ten inches high and cries literal floods of frustrated tears.

I spent a couple of days at a symposium in Sydney last week that looked at the moral and ethical issues around the concept of geoengineering the Earth’s climate as a “response” to global warming.

No metaphor is ever quite perfect (climate impacts are no ‘wonderland’), but Alice’s curious experiences down the rabbit hole seem to fit the idea of medicating the globe out of a possible catastrophe.

And yes, the fact that in some quarters geoengineering is now on the table shows how the debate over climate change policy is itself becoming “curiouser and curiouser” still.

It’s tempting too to dismiss ideas like pumping sulphate particles into the atmosphere or making clouds whiter as some sort of surrealist science fiction.

But beyond the curiosity lies actions being countenanced and discussed by some of the world’s leading scientific institutions.

What is geoengineering?

Geoengineering – also known as climate engineering or climate modification – comes in as many flavours as might have been on offer at the Mad Hatter’s Tea Party.

Professor Jim Falk, of the Melbourne Sustainable Society Institute at the University of Melbourne, has a list of more than 40 different techniques that have been suggested.

They generally take two approaches.

Carbon Dioxide Reduction (CDR) is pretty self-explanatory. Think tree planting, algae farming, increasing the carbon in soils, fertilising the oceans or capturing emissions from power stations: anything that cuts the amount of CO2 in the atmosphere.

Solar Radiation Management (SRM) techniques are concepts to try and reduce the amount of solar energy reaching the earth. Think pumping sulphate particles into the atmosphere (this mimics major volcanic eruptions that have a cooling effect on the planet), trying to whiten clouds or more benign ideas like painting roofs white.

Geoengineering on the table

In 2008 an Australian Government–backed research group issued a report on the state of play of ocean fertilisation, recording that 12 experiments of various kinds had been carried out, with limited to zero evidence of “success”.

This priming of the “biological pump,” as it’s known, promotes the growth of organisms (phytoplankton) that store carbon and then sink to the bottom of the ocean.

The report raised the prospect that larger scale experiments could interfere with the oceanic food chain, create oxygen-depleted “dead zones” (no fish folks), impact on corals and plants and various other unknowns.

The Royal Society – the world’s oldest scientific institution – released a report in 2009, also reviewing various geoengineering technologies.

In 2011, Australian scientists gathered at a geoengineering symposium organised by the Australian Academy of Science and the Australian Academy of Technological Sciences and Engineering.

The London Protocol – a maritime convention relating to dumping at sea – was amended last year to try and regulate attempts at “ocean fertilisation” – where substances, usually iron, are dumped into the ocean to artificially raise the uptake of carbon dioxide.

The latest major report of the United Nations Intergovernmental Panel on Climate Change also addressed the geoengineering issue in several chapters. The IPCC summarised geoengineering this way.

CDR methods have biogeochemical and technological limitations to their potential on a global scale. There is insufficient knowledge to quantify how much CO2 emissions could be partially offset by CDR on a century timescale. Modelling indicates that SRM methods, if realizable, have the potential to substantially offset a global temperature rise, but they would also modify the global water cycle, and would not reduce ocean acidification. If SRM were terminated for any reason, there is high confidence that global surface temperatures would rise very rapidly to values consistent with the greenhouse gas forcing. CDR and SRM methods carry side effects and long-term consequences on a global scale.

Towards the end of this year, the US National Academy of Sciences will be publishing a major report on the “technical feasibility” of some geoengineering techniques.

Fighting Fire With Fire

The symposium in Sydney was co-hosted by the University of New South Wales and the Sydney Environment Institute at the University of Sydney (for full disclosure here, they paid my travel costs and one night stay).

Dr Matthew Kearnes, one of the organisers of the workshop from UNSW, told me there was “nervousness among many people about even thinking or talking about geoengineering.” He said:

I would not want to dismiss that nervousness, but this is an agenda that’s now out there and it seems to be gathering steam and credibility in some elite establishments.

Internationally geoengineering tends to be framed pretty narrowly as just a case of technical feasibility, cost and efficacy. Could it be done? What would it cost? How quickly would it work?

We wanted to get a way from the arguments about the pros and cons and instead think much more carefully about what this tells us about the climate change debate more generally.

The symposium covered a range of frankly exhausting philosophical, social and political considerations – each of them jumbo-sized cans full of worms ready to open.

Professor Stephen Gardiner, of the University of Washington, Seattle, pushed for the wider community to think about the ethical and moral consequences of geoengineering. He drew a parallel between the way, he said, that current fossil fuel combustion takes benefits now at the expense of impacts on future generations. Geoengineering risked making the same mistake.

Clive Hamilton’s book Earthmasters notes “in practice any realistic assessment of how the world works must conclude that geoengineering research is virtually certain to reduce incentives to pursue emission reductions”.

Odd advocates

Curiouser still is that some of the world’s think tanks that shout the loudest that human-caused climate change might not even be a thing, or at least not a thing worth worrying about, are happy to countenance geoengineering as a solution to the problem they think is overblown.

For example, in January this year the Copenhagen Consensus Center, a US-based think tank founded by Danish political scientist Bjorn Lomborg, issued a submission to an Australian Senate inquiry looking at overseas aid and development.

Lomborg’s center has for many years argued that cutting greenhouse gas emissions is too expensive and that action on climate change should have a low-priority compared to other issues around the world.

Lomborg himself says human-caused climate change will not turn into an economic negative until near the end of this century.

Yet Lomborg’s submission to the Australian Senate suggested that every dollar spent on “investigat[ing] the feasibility of planetary cooling through geoengineering technologies” could yield “$1000 of benefits,” although this, Lomborg wrote, was a “rough estimate”.

But these investigations, Lomborg submitted, “would serve to better understand risks, costs, and benefits, but also act as an important potential insurance against global warming”.

Engineering another excuse

Several academics I’ve spoken with have voiced fears that the idea of unproven and potentially disastrous geoengineering technologies being an option to shield societies from the impacts of climate change could be used to distract policy makers and the public from addressing the core of the climate change issue – that is, curbing emissions in the first place.

But if the idea of some future nation, group of nations, or even corporation embarking on a major project to modify the Earth’s climate systems leaves you feeling like you’ve fallen down a surreal rabbit hole, then perhaps we should also ask ourselves this.

Since the year 1750, the world has added something in the region of 1,339,000,000,000 tonnes of carbon dioxide (that’s 1.34 trillion tonnes) to the atmosphere from fossil fuel and cement production.

Raising the level of CO2 in the atmosphere by 40 per cent could be seen as accidental geoengineering.
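The 40 per cent figure is easy to sanity-check against the commonly cited pre-industrial concentration of roughly 280 ppm (the exact endpoint varies by year and dataset; ~395 ppm is approximately the value around the time this article was written, in 2014):

```python
PREINDUSTRIAL_PPM = 280.0  # widely used pre-1750 baseline concentration
CURRENT_PPM = 395.0        # approximate atmospheric CO2 circa 2014

rise = (CURRENT_PPM - PREINDUSTRIAL_PPM) / PREINDUSTRIAL_PPM
print(f"{rise:.0%}")  # → 41%
```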

Time to crawl out of the rabbit hole?