Monthly archive: August 2014

Wittgenstein’s forgotten lesson (Prospect Magazine)

Wittgenstein’s philosophy is at odds with the scientism which dominates our times. Ray Monk explains why his thought is still relevant.

by Ray Monk / July 20, 1999

Published in July 1999 issue of Prospect Magazine

Ludwig Wittgenstein is regarded by many, including myself, as the greatest philosopher of this century. His two great works, Tractatus Logico-Philosophicus (1921) and Philosophical Investigations (published posthumously in 1953) have done much to shape subsequent developments in philosophy, especially in the analytic tradition. His charismatic personality has fascinated artists, playwrights, poets, novelists, musicians and even movie-makers, so that his fame has spread far beyond the confines of academic life.

And yet in a sense Wittgenstein’s thought has made very little impression on the intellectual life of this century. As he himself realised, his style of thinking is at odds with the style that dominates our present era. His work is opposed, as he once put it, to “the spirit which informs the vast stream of European and American civilisation in which all of us stand.” Nearly 50 years after his death, we can see, more clearly than ever, that the feeling that he was swimming against the tide was justified. If we wanted a label to describe this tide, we might call it “scientism,” the view that every intelligible question has either a scientific solution or no solution at all. It is against this view that Wittgenstein set his face.

Scientism takes many forms. In the humanities, it takes the form of pretending that philosophy, literature, history, music and art can be studied as if they were sciences, with “researchers” compelled to spell out their “methodologies”—a pretence which has led to huge quantities of bad academic writing, characterised by bogus theorising, spurious specialisation and the development of pseudo-technical vocabularies. Wittgenstein would have looked upon these developments and wept.

There are many questions to which we do not have scientific answers, not because they are deep, impenetrable mysteries, but simply because they are not scientific questions. These include questions about love, art, history, culture, music: all questions, in fact, that relate to the attempt to understand ourselves better. There is a widespread feeling today that the great scandal of our times is that we lack a scientific theory of consciousness. And so there is a great interdisciplinary effort, involving physicists, computer scientists, cognitive psychologists and philosophers, to come up with tenable scientific answers to the questions: what is consciousness? What is the self? One of the leading competitors in this crowded field is the theory advanced by the mathematician Roger Penrose, that a stream of consciousness is an orchestrated sequence of quantum physical events taking place in the brain. More precisely, Penrose proposes that moments of consciousness arise from quantum events in microtubules, structures inside neurons assembled from the protein tubulin. The theory is, on Penrose’s own admission, speculative, and it strikes many as being bizarrely implausible. But suppose we discovered that Penrose’s theory was correct, would we, as a result, understand ourselves any better? Is a scientific theory the only kind of understanding?

Well, you might ask, what other kind is there? Wittgenstein’s answer to that, I think, is his greatest, and most neglected, achievement. Although Wittgenstein’s thought underwent changes between his early and his later work, his opposition to scientism was constant. Philosophy, he writes, “is not a theory but an activity.” It strives, not after scientific truth, but after conceptual clarity. In the Tractatus, this clarity is achieved through a correct understanding of the logical form of language, which, once achieved, was destined to remain inexpressible, leading Wittgenstein to compare his own philosophical propositions with a ladder, which is thrown away once it has been used to climb up on.

In his later work, Wittgenstein abandoned the idea of logical form and with it the notion of ineffable truths. The difference between science and philosophy, he now believed, is between two distinct forms of understanding: the theoretical and the non-theoretical. Scientific understanding is given through the construction and testing of hypotheses and theories; philosophical understanding, on the other hand, is resolutely non-theoretical. What we are after in philosophy is “the understanding that consists in seeing connections.”

Non-theoretical understanding is the kind of understanding we have when we say that we understand a poem, a piece of music, a person or even a sentence. Take the case of a child learning her native language. When she begins to understand what is said to her, is it because she has formulated a theory? We can say that if we like—and many linguists and psychologists have said just that—but it is a misleading way of describing what is going on. The criterion we use for saying that a child understands what is said to her is that she behaves appropriately: she shows that she understands the phrase “put this piece of paper in the bin,” for example, by obeying the instruction.

Another example close to Wittgenstein’s heart is that of understanding music. How does one demonstrate an understanding of a piece of music? Well, perhaps by playing it expressively, or by using the right sort of metaphors to describe it. And how does one explain what “expressive playing” is? What is needed, Wittgenstein says, is “a culture”: “If someone is brought up in a particular culture, and then reacts to music in such-and-such a way, you can teach him the use of the phrase ‘expressive playing.’” What is required for this kind of understanding is a form of life, a set of communally shared practices, together with the ability to hear and see the connections made by the practitioners of this form of life.

What is true of music is also true of ordinary language. “Understanding a sentence,” Wittgenstein says in Philosophical Investigations, “is more akin to understanding a theme in music than one may think.” Understanding a sentence, too, requires participation in the form of life, the “language-game,” to which it belongs. The reason computers have no understanding of the sentences they process is not that they lack sufficient neuronal complexity, but that they are not, and cannot be, participants in the culture to which the sentences belong. A sentence does not acquire meaning through the correlation, one to one, of its words with objects in the world; it acquires meaning through the use that is made of it in the communal life of human beings.

All this may sound trivially true. Wittgenstein himself described his work as a “synopsis of trivialities.” But when we are thinking philosophically we are apt to forget these trivialities and thus end up in confusion, imagining, for example, that we will understand ourselves better if we study the quantum behaviour of the sub-atomic particles inside our brains, a belief analogous to the conviction that a study of acoustics will help us understand Beethoven’s music. Why do we need reminding of trivialities? Because we are bewitched into thinking that if we lack a scientific theory of something, we lack any understanding of it.

One of the crucial differences between the method of science and the non-theoretical understanding that is exemplified in music, art, philosophy and ordinary life, is that science aims at a level of generality which necessarily eludes these other forms of understanding. This is why the understanding of people can never be a science. To understand a person is to be able to tell, for example, whether he means what he says or not, whether his expressions of feeling are genuine or feigned. And how does one acquire this sort of understanding? Wittgenstein raises this question at the end of Philosophical Investigations. “Is there,” he asks, “such a thing as ‘expert judgment’ about the genuineness of expressions of feeling?” Yes, he answers, there is.

But the evidence upon which such expert judgments about people are based is “imponderable,” resistant to the general formulation characteristic of science. “Imponderable evidence,” Wittgenstein writes, “includes subtleties of glance, of gesture, of tone. I may recognise a genuine loving look, distinguish it from a pretended one… But I may be quite incapable of describing the difference… If I were a very talented painter I might conceivably represent the genuine and simulated glance in pictures.”

But the fact that we are dealing with imponderables should not mislead us into believing that all claims to understand people are spurious. When Wittgenstein was once discussing his favourite novel, The Brothers Karamazov, with Maurice Drury, Drury said that he found the character of Father Zossima impressive. Of Zossima, Dostoevsky writes: “It was said that… he had absorbed so many secrets, sorrows, and avowals into his soul that in the end he had acquired so fine a perception that he could tell at the first glance from the face of a stranger what he had come for, what he wanted and what kind of torment racked his conscience.” “Yes,” said Wittgenstein, “there really have been people like that, who could see directly into the souls of other people and advise them.”

“An inner process stands in need of outward criteria,” runs one of the most often quoted aphorisms of Philosophical Investigations. It is less often realised what emphasis Wittgenstein placed on the need for sensitive perception of those “outward criteria” in all their imponderability. And where does one find such acute sensitivity? Not, typically, in the works of psychologists, but in those of the great artists, musicians and novelists. “People nowadays,” Wittgenstein writes in Culture and Value, “think that scientists exist to instruct them, poets, musicians, etc. to give them pleasure. The idea that these have something to teach them: that does not occur to them.”

At a time like this, when the humanities are institutionally obliged to pretend to be sciences, we need more than ever the lessons about understanding that Wittgenstein—and the arts—have to teach us.

The original Eskimos have no living descendants, say scientists (The Christian Science Monitor)

By Charles Choi, LiveScience Contributing Writer / August 28, 2014

Ancient human DNA is shedding light on the peopling of the Arctic region of the Americas, revealing that the first people there left no genetic descendants in the New World, contrary to what was previously thought.

The study’s researchers suggest the first group of people in the New World Arctic may have lived in near-isolation for more than 4,000 years because of a mindset that eschewed adopting new ideas. It remains a mystery why they ultimately died off, they added.

The first people in the Arctic of the Americas may have arrived about 6,000 years ago, crossing the Bering Strait from Siberia. The area was the last region of the New World that humans populated due to its harsh and frigid nature.

But the details of how the New World Arctic was peopled remain a mystery because the region’s vast size and remoteness make it difficult to conduct research there. For example, it was unclear whether the Inuit people living there today and the cultures that preceded them were genetically the same people, or independent groups.

The scientists analyzed DNA from bone, teeth and hair samples collected from the remains of 169 ancient humans from Arctic Siberia, Alaska, Canada and Greenland. They also sequenced the complete genomes of seven modern-day people from the region for comparison.

Previous research suggested people in the New World Arctic could be divided into two distinct groups — the Paleo-Eskimos, who showed up first, and the Neo-Eskimos, who got there nearly 4,000 years later.

The early Paleo-Eskimo people include the Pre-Dorset and Saqqaq cultures, who mostly hunted reindeer and musk ox. When a particularly cold period began about 800 B.C., the Late Paleo-Eskimo people known as the Dorset culture emerged. The Dorset people had a more marine lifestyle, involving whaling and seal hunting. Their culture is divided into three phases, altogether lasting about 2,100 years.

“One may almost say kind of jokingly or informally that the Dorsets were the hobbits of the Eastern Arctic, a very strange and very conservative people that we are just now getting to know a little bit,” said study co-author William Fitzhugh, an anthropologist at the Smithsonian Institution’s National Museum of Natural History in Washington, D.C.

The Dorset culture ended sometime between 1150 and 1350 A.D., getting rapidly replaced after the sudden appearance of Neo-Eskimo whale-hunters known as the Thule culture. These newcomers from the Bering Strait region brought new technology from Asia, including complex weapons such as sinew-backed bows and more effective means of transportation such as dog sleds. The Thule “pioneered the hunting of large whales for the first time ever in, I guess, maybe anywhere in the world,” Fitzhugh said.

Modern Inuit cultures emerged from the Thule during the decline of whaling near the end of the period known as the Little Ice Age, which lasted from the 16th to 19th century. This ultimately led the Inuit to adopt the hunting of walruses at the edges of ice packs and the hunting of seals at their breathing holes.

Previous studies hinted that some modern Native Americans, such as the Athabascans in northwestern North America, might be descended from the Paleo-Eskimos. However, these findings now quash that idea. “The results of this paper have a bearing not just on the peopling of the Arctic, but also the peopling of the Americas,” lead study author Maanasa Raghavan, a molecular biologist at the Natural History Museum of Denmark at the University of Copenhagen, told Live Science.

The new findings suggest the Paleo-Eskimos apparently survived in near-isolation for more than 4,000 years. The arrival of Paleo-Eskimos into the Americas was its own independent migration event, with Paleo-Eskimos genetically distinct from both the Neo-Eskimos and modern Native Americans.

“I was actually surprised that we don’t find any evidence of mixture between Native Americans and Paleo-Eskimos,” said study co-author Eske Willerslev, an evolutionary geneticist also at the Natural History Museum of Denmark. “In other studies, when we see people meeting each other, they might be fighting each other, but normally they actually also have sex with each other, but that doesn’t seem to really have been the case here. They must have been coexisting for thousands of years, so at least from a genetic point of view, the lack of mixture between those two groups was a bit surprising.”

The reason the Paleo-Eskimos may not have mixed with the Neo-Eskimos or the ancestors of modern Native Americans was “because they had such an entirely different mindset,” Fitzhugh said. “Their religions were completely different, their resources and their technologies were different. When you have people who are so close to nature as the Paleo-Eskimos had to be to survive, they had to be extremely careful about maintaining good relationships with the animals, and that meant not polluting the relationship by introducing new ideas, new rituals, new materials and so forth.”

The researchers did find evidence of gene flow between Paleo-Eskimos and Neo-Eskimos. However, this likely occurred before the groups migrated to the New World, back in Siberia, among the common ancestors of both lineages.  The new evidence suggests that in the American Arctic, the two groups largely stayed separate.

In addition, while differences in the artifacts and architecture of the Pre-Dorset and Dorset had led previous studies to suggest they had different ancestral populations, these new findings suggest the Early and Late Paleo-Eskimos did share a common ancestral group. “The pre-Dorset people, the Dorset ancestors, seemed to have morphed into Dorset culture,” Fitzhugh told Live Science.

One mystery these findings help solve is the origin of the Sadlermiut, a people who survived near Canada’s Hudson Bay until the beginning of the 20th century, when the last of them perished from a disease introduced by whalers. The Sadlermiut avoided interaction with everyone outside their own society. According to their Inuit neighbors, they spoke a strange dialect, lacked skills the Inuit considered vital, such as building igloos and tending oil lamps, were unclean, and did not observe standard Inuit taboos. All of this had suggested that the Sadlermiut were descended from Paleo-Eskimos rather than Neo-Eskimos.

However, these new findings revealed the Sadlermiut showed evidence of only Inuit ancestry. Their cultural differences from other Inuit may have been the result of their isolation.

It remains a mystery why the Dorset people ultimately died off. Previous studies suggested the Dorset were absorbed by the expanding Thule population — and the Thule did adopt Dorset harpoon types, soapstone lamps and pots, and snow houses. However, these new findings do not find evidence of interbreeding between the groups.

One possibility is that the rise of the Thule represented “an example of prehistoric genocide,” Fitzhugh said. “The lack of significant genetic mixing might make it appear so.” However, Thule legends of the Dorset “tell only of friendly relations with a race of gentle giants,” Fitzhugh added.

Another possibility is that diseases introduced by Vikings or the Thule may have triggered the collapse of the Dorset, Fitzhugh said. However, “if it’s disease, then you’d expect to find dead bodies of Dorset people in their houses, and that’s never been found,” Fitzhugh said.

To help solve this and other remaining mysteries about the peopling of the New World Arctic, the researchers plan to look at more ancient human remains in both the Americas and Asia. The scientists detailed their findings in the Aug. 29 issue of the journal Science.

 

Quantum physics enables revolutionary imaging method (Science Daily)

Date: August 28, 2014

Source: University of Vienna

Summary: Researchers have developed a fundamentally new quantum imaging technique with strikingly counter-intuitive features. For the first time, an image has been obtained without ever detecting the light that was used to illuminate the imaged object, while the light revealing the image never touches the imaged object.

A new quantum imaging technique generates images with photons that have never touched the object — in this case a sketch of a cat. This alludes to the famous Schrödinger cat paradox, in which a cat inside a closed box is said to be simultaneously dead and alive as long as there is no information outside the box to rule out one option over the other. Similarly, the new imaging technique relies on a lack of information regarding where the photons are created and which path they take. Credit: Patricia Enigl, IQOQI

Researchers from the Institute for Quantum Optics and Quantum Information (IQOQI), the Vienna Center for Quantum Science and Technology (VCQ), and the University of Vienna have developed a fundamentally new quantum imaging technique with strikingly counterintuitive features. For the first time, an image has been obtained without ever detecting the light that was used to illuminate the imaged object, while the light revealing the image never touches the imaged object.

In general, to obtain an image of an object one has to illuminate it with a light beam and use a camera to sense the light that is either scattered or transmitted through that object. The type of light used to shine onto the object depends on the properties that one would like to image. Unfortunately, in many practical situations the ideal type of light for the illumination of the object is one for which cameras do not exist.

The experiment published in Nature this week for the first time breaks this seemingly self-evident limitation. The object (e.g. the contour of a cat) is illuminated with light that remains undetected. Moreover, the light that forms an image of the cat on the camera never interacts with it. In order to realise their experiment, the scientists use so-called “entangled” pairs of photons. These pairs of photons — which are like interlinked twins — are created when a laser interacts with a non-linear crystal. In the experiment, the laser illuminates two separate crystals, creating one pair of twin photons (consisting of one infrared photon and a “sister” red photon) in either crystal. The object is placed in between the two crystals. The arrangement is such that if a photon pair is created in the first crystal, only the infrared photon passes through the imaged object. Its path then goes through the second crystal where it fully combines with any infrared photons that would be created there.

With this crucial step, there is now, in principle, no possibility to find out which crystal actually created the photon pair. Moreover, there is now no information in the infrared photon about the object. However, due to the quantum correlations of the entangled pairs the information about the object is now contained in the red photons — although they never touched the object. Bringing together both paths of the red photons (from the first and the second crystal) creates bright and dark patterns, which form the exact image of the object.
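
To make the interference mechanism concrete, here is a minimal numerical sketch (an illustration of the standard description of such induced-coherence setups, not code from the study): the count rate of the detected red photons varies roughly as 1 + |T|·cos(phase), where T is the complex transmission that the undetected infrared beam picks up at the object, so the fringe contrast in the red light maps out the object point by point.

```python
# Minimal sketch (not the authors' code) of why the red photons carry the image.
# Assumed model: in an induced-coherence setup the detected (red) photon rate goes as
# I(phase) ~ 1 + |T| * cos(phase + arg(T)), where T is the complex transmission
# the undetected (infrared) beam acquires at the object.

import numpy as np

def detected_intensity(phase, T):
    """Normalized red-photon count rate versus interferometer phase."""
    return 0.5 * (1.0 + np.abs(T) * np.cos(phase + np.angle(T)))

phases = np.linspace(0.0, 2.0 * np.pi, 201)

for label, T in [("opaque spot of the object (T = 0)", 0.0),
                 ("semi-transparent spot (T = 0.5)", 0.5),
                 ("transparent spot (T = 1)", 1.0)]:
    I = detected_intensity(phases, T)
    visibility = (I.max() - I.min()) / (I.max() + I.min())
    print(f"{label}: fringe visibility = {visibility:.2f}")
```

In this simple model the fringe visibility equals |T|, so opaque regions of the object wash the pattern out while transparent regions give high-contrast bright and dark fringes, even though none of the detected photons ever passed through the object.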

Stunningly, all of the infrared photons (the only light that illuminated the object) are discarded; the picture is obtained by only detecting the red photons that never interacted with the object. The camera used in the experiment is even blind to the infrared photons that have interacted with the object. In fact, very low light infrared cameras are essentially unavailable on the commercial market. The researchers are confident that their new imaging concept is very versatile and could even enable imaging in the important mid-infrared region. It could find applications where low light imaging is crucial, in fields such as biological or medical imaging.

 

Journal Reference:

  1. Gabriela Barreto Lemos, Victoria Borish, Garrett D. Cole, Sven Ramelow, Radek Lapkiewicz, Anton Zeilinger. Quantum imaging with undetected photons. Nature, 2014; 512 (7515): 409. DOI: 10.1038/nature13586

Inside the teenage brain: New studies explain risky behavior (Science Daily)

Date: August 27, 2014

Source: Florida State University

Summary: It’s common knowledge that teenage boys seem predisposed to risky behaviors. Now, a series of new studies is shedding light on specific brain mechanisms that help to explain what might be going on inside juvenile male brains.



It’s common knowledge that teenage boys seem predisposed to risky behaviors. Now, a series of new studies is shedding light on specific brain mechanisms that help to explain what might be going on inside juvenile male brains.

Florida State University College of Medicine neuroscientist Pradeep Bhide brought together some of the world’s foremost researchers in a quest to explain why teenagers — boys, in particular — often behave erratically.

The result is a series of 19 studies that approached the question from multiple scientific domains, including psychology, neurochemistry, brain imaging, clinical neuroscience and neurobiology. The studies are published in a special volume of Developmental Neuroscience, “Teenage Brains: Think Different?”

“Psychologists, psychiatrists, educators, neuroscientists, criminal justice professionals and parents are engaged in a daily struggle to understand and solve the enigma of teenage risky behaviors,” Bhide said. “Such behaviors impact not only the teenagers who obviously put themselves at serious and lasting risk but also families and societies in general.

“The emotional and economic burdens of such behaviors are quite huge. The research described in this book offers clues to what may cause such maladaptive behaviors and how one may be able to devise methods of countering, avoiding or modifying these behaviors.”

Examples of findings published in the book that provide new insights into the inner workings of a teenage boy’s brain:

• Unlike children or adults, teenage boys show enhanced activity in the part of the brain that controls emotions when confronted with a threat. Magnetic resonance scanner readings in one study revealed that the level of activity in the limbic brain of adolescent males reacting to threat, even when they’ve been told not to respond to it, was strikingly different from that in adult men.

• Using brain activity measurements, another team of researchers found that teenage boys were mostly immune to the threat of punishment but hypersensitive to the possibility of large gains from gambling. The results question the effectiveness of punishment as a deterrent for risky or deviant behavior in adolescent boys.

• Another study demonstrated that a molecule known to be vital in developing fear of dangerous situations is less active in adolescent male brains. These findings point towards neurochemical differences between teenage and adult brains, which may underlie the complex behaviors exhibited by teenagers.

“The new studies illustrate the neurobiological basis of some of the more unusual but well-known behaviors exhibited by our teenagers,” Bhide said. “Stress, hormonal changes, complexities of psycho-social environment and peer-pressure all contribute to the challenges of assimilation faced by teenagers.

“These studies attempt to isolate, examine and understand some of these potential causes of a teenager’s complex conundrum. The research sheds light on how we may be able to better interact with teenagers at home or outside the home, how to design educational strategies and how best to treat or modify a teenager’s maladaptive behavior.”

Bhide conceived and edited “Teenage Brains: Think Different?” His co-editors were Barry Kasofsky and B.J. Casey, both of Weill Medical College at Cornell University. The book was published by Karger Medical and Scientific Publisher of Basel, Switzerland. More information on the book can be found at: http://www.karger.com/Book/Home/261996

The table of contents to the special journal volume can be found at: http://www.karger.com/Journal/Issue/261977

Agamben: Thought as Courage (Outras Palavras)

The Italian philosopher pushes back against those who see him as a pessimist, cites Marx, and maintains: “the desperate conditions of the society in which I live fill me with hope”

Interview by Juliette Cerf, published by Verso | Portuguese translation by Pedro Lucas Dulci

As the church bells ring out in Trastevere, where we have arranged to meet, his face comes to mind… Giorgio Agamben appeared as the apostle Philip in Pier Paolo Pasolini’s The Gospel According to St. Matthew (1964). At the time, the young law student, born in Rome in 1942, moved among the artists and intellectuals gathered around the writer Elsa Morante. A Dolce Vita? A moment of intense friendships, in any case. Little by little the jurist turned toward philosophy, after Heidegger’s seminar at Thor-en-Provence. He then threw himself into editing the works of Walter Benjamin, a thinker never far from his own thought, as were Guy Debord and Michel Foucault. Giorgio Agamben thus became acquainted with a messianic sense of History, a critique of the society of the spectacle, and a resistance to biopower, the control that the authorities exercise over life, or more precisely over the bodies of citizens. As poetic as it is political, his thought digs through the layers in search of archaeological evidence, working its way back through the whirlwind of time to the origins of words. Author of a series of works gathered under the Latin title Homo sacer, Agamben ranges across the terrain of law, religion and literature, but now refuses to travel to the United States, so as not to be subjected to its biometric controls. Against this reduction of a man to his biological data, Agamben proposes an exploration of the field of possibilities.

Berlusconi has fallen, as have several other European leaders. Having written about sovereignty, what thoughts does this unprecedented situation provoke in you?

Public power is losing its legitimacy. Mutual suspicion has grown between the authorities and the citizens, and this mounting distrust has brought down several regimes. The democracies are deeply worried: how else could one explain that they have a security policy twice as bad as that of Italian fascism? In the eyes of power, every citizen is a potential terrorist. Never forget that the biometric apparatus, which will soon be embedded in every citizen’s identity card, was first created to keep track of repeat criminal offenders.

Is this crisis linked to the fact that the economy has stolen a march on politics?

To use the vocabulary of ancient medicine, the crisis marks the decisive moment of an illness. But today the crisis is no longer temporary: it is the very course of capitalism, its internal engine. The crisis is permanently under way because, like other mechanisms of exception, it allows the authorities to impose measures they could never get to work in a normal period. The crisis corresponds perfectly, odd as it may sound, to what people in the Soviet Union used to call “the permanent revolution.”

Theology plays a very important role in your thinking today. Why is that?

The research projects I have carried out recently have shown me that our modern societies, which claim to be secular, are on the contrary governed by secularized theological concepts, which act all the more powerfully because we are unaware of their existence. We will never understand what is happening today unless we understand that capitalism is, in reality, a religion. And, as Walter Benjamin said, it is the fiercest of all religions, because it allows no atonement… Take the word “faith,” usually reserved for the religious sphere. The Greek term corresponding to it in the Gospels is pistis. A historian of religion, trying to grasp the meaning of this word, was out walking in Athens one day when he suddenly saw a sign bearing the words “Trapeza tes pisteos.” He went up to it and realized that it belonged to a bank: Trapeza tes pisteos means “credit bank.” That was enlightening enough.

What does this story tell us?

Pistis, faith, is the credit we enjoy with God and that God’s word enjoys with us. And there is a whole sphere of our society that revolves entirely around credit. That sphere is money, and the bank is its temple. As you know, money is nothing but credit: on dollar and pound notes (though not on the euro, which should have raised some eyebrows…) you can still read that the central bank will pay the bearer the equivalent of this credit. The crisis was triggered by a series of operations on credits that were resold dozens of times before they could be realized. In managing credit, the bank, which has taken the place of the Church and its priests, manipulates people’s faith and trust. If politics is in retreat today, it is because financial power, substituting itself for religion, has hijacked all faith and all hope. That is why I am pursuing research on religion and law: archaeology seems to me the best way of gaining access to the present. Europeans cannot gain access to their present without confronting their past.

What is this archaeological method?

It is an inquiry into the archè, which in Greek means both “beginning” and “command.” In our tradition, the origin is both what gives rise to something and what commands its history. But this origin cannot be dated or chronologically situated: it is a force that goes on acting in the present, just as childhood, according to psychoanalysis, determines the mental activity of the adult, or as the Big Bang, which according to astrophysicists gave rise to the Universe, continues to expand it even today. The example that typifies this method would be the transformation from animal to human (anthropogenesis), that is, an event we must imagine necessarily took place but which did not end once and for all: man is always in the process of becoming human, and therefore also remains inhuman, animal. Philosophy is not an academic discipline but a way of measuring oneself against this event, which never stops taking place and which determines the humanity and the inhumanity of humankind: very important questions, in my view.

Isn’t this vision of becoming human, in your works, rather pessimistic?

I am very glad you asked me that question, since I often meet people who call me a pessimist. First of all, on a personal level, it is not true at all. Secondly, the concepts of pessimism and optimism have nothing to do with thought. Debord often quoted a letter of Marx’s saying that “the desperate conditions of the society in which I live fill me with hope.” Any radical thought always adopts the most extreme position of despair. Simone Weil said: “I do not like those people who warm their hearts with empty hopes.” Thought, for me, is exactly that: the courage of hopelessness. And is that not the height of optimism?

According to you, to be contemporary means to perceive the darkness of one’s era rather than its light. How should we understand this idea?

To be contemporary is to respond to the appeal that the darkness of the era makes to us. In the expanding Universe, the space separating us from the most distant galaxies is growing at such a speed that the light of their stars can never reach us. To perceive, amid the darkness, this light that tries to reach us but cannot: that is what it means to be contemporary. The present is the hardest thing for us to live. Because an origin, I repeat, is not confined to the past: it is a whirlwind, in Benjamin’s very fine image, an abyss in the present. And we are drawn into this abyss. That is why the present is, par excellence, the one thing that remains unlived.

Who is the supreme contemporary: the poet or the philosopher?

My tendency is not to oppose poetry and philosophy, since both of these experiences take place within language. The home of truth is language, and I would distrust any philosopher who left it to others, philologists or poets, to look after this home. We must take care of language, and I believe that one of the essential problems with the media is that they show no such concern. The journalist, too, is answerable to language, and will be judged by it.

How does your most recent work, on liturgy, give us a key to the present?

To analyze liturgy is to put your finger on an immense shift in our way of representing existence. In the ancient world, existence was simply there, something present. In the Christian liturgy, man is what he must be and must be what he is. Today we have no representation of reality other than the operative, the effective. We no longer conceive of an existence that produces no effect. What is not effective (workable, governable) is not real. The next task of philosophy is to think a politics and an ethics that are freed from the concepts of duty and effectiveness.

By thinking about inoperativity, for example?

The insistence on work and production is a curse. The left took the wrong path when it adopted these categories, which lie at the heart of capitalism. But we should be clear that inoperativity, as I conceive it, is neither inertia nor idling. We must free ourselves from work, in an active sense; I am very fond of the French word désoeuvrer. This is an activity that renders all the social tasks of the economy, law and religion inoperative, thereby freeing them for other possible uses. For precisely this is proper to humankind: to write a poem that escapes the communicative function of language; or to speak, or to give a kiss, thereby changing the function of the mouth, which serves first of all for eating. In the Nicomachean Ethics, Aristotle asked himself whether humanity has a task. The work of the flute player is to play the flute, and the work of the cobbler is to make shoes, but is there a work of man as such? He then advanced the hypothesis that man is perhaps born without any task, but he soon abandoned it. Yet this hypothesis takes us to the heart of what it is to be human. The human being is the animal that has no job: it has no biological task, no clearly prescribed function. Only a powerful being has the capacity not to be powerful. Man can do everything but does not have to do anything.

You studied law, but your whole philosophy seeks, in a way, to free itself from the law.

Leaving secondary school, I had only one desire: to write. But what does that mean? To write what, exactly? It was, I believe, a desire for possibility in my life. What I wanted was not “to write” but “to be able to” write. It is an unconsciously philosophical gesture: the search for possibility in one’s life, which is a good definition of philosophy. The law is apparently the opposite: it is a matter of necessity, not of possibility. But when I studied law, it was because, of course, I could not have reached the possible without passing through the test of the necessary. In any case, my legal studies proved very useful to me. Power has displaced political concepts in favor of legal ones. The legal sphere never stops expanding: laws are made about everything, in domains where this would once have been inconceivable. This proliferation of law is dangerous: in our democratic societies there is nothing that is not regulated. Arab jurists taught me something I liked very much. They represent the law as a kind of tree, with what is forbidden at one extreme and what is obligatory at the other. For them, the jurist’s role lies between these two extremes, that is, with everything one can do without incurring legal sanction. This zone of freedom never stops shrinking, when it ought instead to be expanded.

In 1997, in the first volume of your Homo Sacer series, you said that the concentration camp is the norm of our political space. From Athens to Auschwitz…

I have been much criticized for this idea that the camp has replaced the city as the nomos (norm, law) of modernity. I was not looking at the camp as a historical fact but as the hidden matrix of our society. What is a camp? It is a piece of territory placed outside the juridico-political order, the materialization of the state of exception. Today the state of exception and depoliticization have penetrated everything. Isn’t the space under CCTV surveillance in today’s cities, public or private, indoor or outdoor, just such a space? New spaces are being created: the Israeli model of occupied territory, with all those barriers shutting out the Palestinians, has been transposed to Dubai to create hyper-secure islands of tourism…

At what stage is Homo sacer?

When I began this series, what interested me was the relation between law and life. In our culture the notion of life is never defined, yet it is incessantly divided: there is politically qualified life (bios), the natural life common to all animals (zoé), vegetative life, social life, and so on. Perhaps we could arrive at a form of life that resists such divisions? I am currently writing the last volume of Homo sacer. Giacometti said something I really liked: you never finish a painting, you abandon it. His paintings are never finished; their potential is never exhausted. I would like the same to be true of Homo sacer: abandoned, but never finished. Besides, I think philosophy should not consist too much of theoretical statements; theory must at times demonstrate its own insufficiency.

Is that why, alongside your theoretical essays, you have always written shorter, more poetic texts?

Yes, exactly. These two registers of writing are not in contradiction, and I hope they often even intersect. It was out of a big book, The Kingdom and the Glory, a genealogy of government and economy, that I was forcibly struck by this notion of inoperativity, which I have tried to develop more concretely in other texts. These crossing paths are the whole pleasure of writing and thinking.

Working Undercover in a Slaughterhouse: an interview with Timothy Pachirat (Medium)

Timothy Pachirat is Assistant Professor of Political Science at the University of Massachusetts Amherst and the author of Every Twelve Seconds: Industrialized Slaughter and the Politics of Sight, an ethnographic account of his undercover job in a cattle slaughterhouse. Pachirat’s book reveals the timeless human pattern of hidden violence and our reluctance to awaken to unpleasant realities in which we are all implicated by the very fact of living together in society. I interviewed him in 2012 as part of my MetaHack interview series.

 

Avi Solomon: Tell us a bit about yourself.

Timothy Pachirat: I was born and raised in northeastern Thailand in a Thai-American family. In high school, I spent a year in the high desert of rural Oregon as an exchange student where I worked on a cattle ranch, farmed alfalfa, and—improbably—became a running back for the school’s football team. Since then, I’ve lived in Illinois, Indiana, Connecticut, Alabama, Nebraska, and New York City working as a builder of housing trusses, a pizza deliverer, a behavioral therapist for children diagnosed with autism, a stay-at-home-dad, a graduate student, a slaughterhouse worker, and as an assistant professor of politics.

 

Timothy Pachirat

Avi: What alerted you to the importance of doing ethnographic fieldwork?

Timothy: Like many mixed-race, mixed-culture, and mixed-language kids, I developed something of an innate ethnographic sensibility by virtue of the complex cultural terrain I grew up in. Long before I’d ever heard the word ‘ethnography,’ for example, I spent my undergraduate fall and spring breaks sleeping alongside and getting to know unhoused men and women on Lower Wacker Drive in Chicago as a way of making some sense of the vast inequalities I perceived in American society and in the world. While pursuing a Ph.D. in political science at Yale University, it seemed natural to gravitate to a research orientation that would allow me to engage bodily—as participant and as observer—with the lived experiences of people I might not otherwise ever come into contact with. I was learning a lot of fancy theories that were thrilling on paper, and I was learning some powerful techniques of statistical analysis, but only ethnography allowed me to weigh those made-in-the-academy concepts and techniques against the situated, specific, and beautifully complex lived experiences of the actual social worlds those concepts and techniques purported to describe and explain.

 

Avi: Why did you choose to go undercover in a slaughterhouse?

Timothy: I wanted to understand how massive processes of violence become normalized in modern society, and I wanted to do so from the perspective of those who work in the slaughterhouse. My hunch was that close attention to how the work of industrialized killing is performed might illuminate not only how the realities of industrialized animal slaughter are made tolerable, but also the way distance and concealment operate in analogous social processes: war executed by volunteer armies; the subcontracting of organized terror to mercenaries; and the violence underlying the manufacturing of thousands of items and components we make contact with in our everyday lives. Like its more self-evidently political analogues—the prison, the hospital, the nursing home, the psychiatric ward, the refugee camp, the detention center, the interrogation room, and the execution chamber—the modern industrialized slaughterhouse is a ‘zone of confinement,’ a ‘segregated and isolated territory,’ in the words of sociologist Zygmunt Bauman, ‘invisible’ and ‘on the whole inaccessible to ordinary members of society.’ I worked as an entry-level worker on the kill floor of an industrialized slaughterhouse in order to understand, from the perspective of those who participate directly in them, how these zones of confinement operate.

Avi: Can you tell us about the slaughterhouse you worked in?

Timothy: Because my goal was not to write an expose of a particular place, I do not name the Nebraska slaughterhouse I worked in or use real names for the people I encountered there. The slaughterhouse employs nearly eight hundred nonunionized workers, the vast majority being immigrants from Central and South America, Southeast Asia, and East Africa. It generates over $820 million annually in sales to distributors within and outside of the United States and ranks among the top handful of cattle-slaughtering facilities worldwide in volume of production. The line speed on the kill floor is approximately three hundred cattle per hour, or one every twelve seconds. In a typical workday, between twenty-two and twenty-five hundred cattle are killed there, adding up to well over ten thousand cattle killed per five-day week, or more than half a million cattle slaughtered each year.
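
The throughput figures are easy to verify; here is a quick back-of-the-envelope check in Python (my own illustration of the arithmetic quoted above, not material from the book):

```python
# Back-of-the-envelope check of the kill-floor throughput figures quoted above.

cattle_per_hour = 300
seconds_per_animal = 3600 / cattle_per_hour      # -> one animal every 12 seconds

daily_low, daily_high = 2200, 2500               # cattle killed in a typical workday
weekly_low, weekly_high = 5 * daily_low, 5 * daily_high
yearly_low = 50 * weekly_low                     # assuming roughly 50 working weeks

print(f"one animal every {seconds_per_animal:.0f} seconds")
print(f"{weekly_low:,} to {weekly_high:,} cattle per five-day week")
print(f"at least {yearly_low:,} cattle per year")
```

The output (12 seconds per animal; 11,000 to 12,500 per week; at least 550,000 per year) lines up with the "well over ten thousand per week" and "more than half a million per year" figures Pachirat gives.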

Avi: What jobs did you end up doing there?

Timothy: My first job was as a liver hanger in the cooler. For ten hours each day, I stood in 34-degree cold, took freshly eviscerated livers off an overhead line, and hung them on carts to be chilled for packing. I was then moved to the chutes, where I drove live cattle into the knocking box where they were shot in the head with a captive bolt gun. Finally, I was promoted to a quality-control position, a job that gave me access to every part of the kill floor and made me an intermediary between the USDA federal meat inspectors and the kill floor managers.

Avi: How did you acclimatize to the work?

Timothy: Slowly and painfully. Each job came with its own set of physical, psychological, and emotional challenges. Although it was physically demanding, my main battle hanging livers in the cooler was with the unbearable monotony. Pranks, jokes, and even physical pain became ways of negotiating that monotony. Working in the chutes took me out of the sterilized environment of the cooler and forced a confrontation with the pain and fear of each individual animal as they were driven up the serpentine line into the knocking box. Working as a quality control worker forced me to master a set of technical and bureaucratic requirements even as it made me complicit in surveilling and disciplining my former coworkers on the line. Although it’s been over seven years since I left the kill floor, I am still struck by the continued emotional and psychological impacts that come from direct participation in the routinized taking of life.

Avi: How did your coworkers treat you?

Timothy: I would never have lasted more than a few days in the slaughterhouse were it not for the kindness, acceptance, and, in some cases, friendship of my fellow line workers. They showed me how to do the work, bailed me out when I screwed up, and, more importantly, taught me how to survive the work. Still, there were divisions and tensions amongst the workers based on race, gender, and job responsibilities. In addition to showing the forms of solidarity amongst the workers, my book also details these tensions and how I navigated them.

 

“Knocking” Box

Avi: Who is a “knocker”?

Timothy: The knocker is the worker who stands at the knocking box and shoots each individual animal in the head with a captive bolt steel gun. Of 121 distinct kill floor jobs that I map and describe in the book, only the knocker both sees the cattle while sentient and delivers the blow that is supposed to render them insensible. On an average day, this lone worker shoots 2,500 individual animals at a rate of one every twelve seconds.

Avi: Who else is directly involved in killing each cow?

Timothy: After the knocker shoots the cattle, they fall onto a conveyor belt where they are shackled and hoisted onto an overhead line. Hanging upside down by their hind legs, they travel through a series of ninety degree turns that take them out of the knocker’s line of sight. There, a presticker and sticker sever the carotid arteries and jugular veins. The animals then bleed out as they travel further down the overhead chain to the tail ripper, who begins the process of removing their body parts and hides. Of over 800 workers on the kill floor, only four are directly involved in the killing of the cattle and less than 20 have a line of sight to the killing.

Avi: Were you able to interview any knockers?

Timothy: I was not able to directly interview the knocker, but I spoke with many other workers about their perceptions of the knocker. There is a kind of collective mythology built up around this particular worker, a mythology that allows for an implicit moral exchange in which the knocker alone performs the work of killing, while the work of the other 800 slaughterhouse workers is morally unrelated to that killing. It is a fiction, but a convincing one: of all the workers in the slaughterhouse, only the knocker delivers the blow that begins the irreversible process of transforming the live creatures into dead ones. If you listen carefully enough to the hundreds of workers performing the 120 other jobs on the kill floor, this might be the refrain you hear: ‘Only the knocker.’ It is simple moral math: the kill floor operates with 120+1 jobs. And as long as the 1 exists, as long as there is some plausible narrative that concentrates the heaviest weight of the dirtiest work on this 1, then the other 120 kill floor workers can say, and believe it, ‘I’m not going to take part in this.’

Avi: What are the main strategies used to hide violence in the slaughterhouse?

Timothy: The first and most obvious is that the violence of industrialized killing is hidden from society at large. Over 8.5 billion animals are killed for food each year in the United States, but this killing is carried out by a small minority of largely immigrant workers who labor behind opaque walls, most often in rural, isolated locations far from urban centers. Furthermore, laws supported by the meat and livestock industries are currently under consideration in six states that criminalize the publicizing of what happens in slaughterhouses and other animal facilities without the consent of the slaughterhouse owners. Iowa’s House of Representatives, for example, forwarded a bill to the Iowa Senate last year that would make it a felony to distribute or possess video, audio, or printed material gleaned through unauthorized access to a slaughterhouse or animal facility.

Second, the slaughterhouse as a whole is divided into compartmentalized departments. The front office is isolated from the fabrication department, which is in turn isolated from the cooler, which is in turn isolated from the kill floor. It is entirely possible to spend years working in the front office, fabrication department, or cooler of an industrialized slaughterhouse that slaughters over half a million cattle per year without ever once encountering a live animal much less witnessing one being killed.

 

Cattle Kill Floor Plan

But third and most importantly, the work of killing is hidden even at the site where one might expect it to be most visible: the kill floor itself. The complex division of labor and space acts to compartmentalize and neutralize the experience of “killing work” for each of the workers on the kill floor. I’ve already mentioned the division of labor in which only a handful of workers, out of a total workforce of over 800, are directly involved in or even have a line of sight to the killing of the animals. To give another example, the kill floor is divided spatially into a clean side and a dirty side. The dirty side refers to everything that happens while the cattle’s hides are still on them and the clean side to everything that happens after the hides have been removed. Workers from the clean side are segregated from workers on the dirty side, even during food and bathroom breaks. This translates into a kind of phenomenological compartmentalization where the minority of workers who deal with the “animals” while their hides are still on are kept separate from the majority of workers who deal with the “carcasses” after their hides have been removed. In this way, the violence of turning animal into carcass is quarantined amongst the dirty side workers, and even there it is further confined by finer divisions of labor and space.

In addition to spatial and labor divisions, the use of language is another way of concealing the violence of killing. From the moment cattle are unloaded from transport trucks into the slaughterhouse’s holding pens, managers and kill floor supervisors refer to them as ‘beef.’ Although they are living, breathing, sentient beings, they have already linguistically been reduced to inanimate flesh, to use-objects. Similarly, there is a slew of acronyms and technical language around the food safety inspection system that reduces the quality control worker’s job to a bureaucratic, technical regime rather than one that is forced to confront the truly massive taking of life. Although the quality control worker has full physical movement throughout the kill floor and sees every aspect of the killing, her interpretive frame is interdicted by the technical and bureaucratic requirements of the job. Temperatures, hydraulic pressures, acid concentrations, bacterial counts, and knife sanitization become the primary focus, rather than the massive, unceasing taking of life.

Avi: Is anyone working in the slaughterhouse consciously aware of these strategies?

Timothy: I don’t think anyone sat down and said, ‘Let’s design a slaughtering process that creates a maximal distance between each worker and the violence of killing and allows each worker to contribute without having to confront the violence directly.’ The division between clean and dirty side on the kill floor mentioned earlier, for example, is overtly motivated by a food-safety logic. The cattle come into the slaughterhouse caked in feces and vomit, and from a food-safety perspective the challenge is to remove the hides while minimizing the transfer of these contaminants to the flesh underneath. But what’s fascinating is that the effects of these organizations of space and labor are not just increased ‘efficiency’ or increased ‘food-safety’ but also the distancing and concealment of violent processes even from those participating directly in them. From a political point of view, from a point of view interested in understanding how relations of violent domination and exploitation are reproduced, it is precisely these effects that matter most.

 

Auschwitz Death Factory Plan by Sonderkommando survivor David Olere

Avi: Did the death factories of Auschwitz have the same mechanisms at work?

Timothy: I recommend Zygmunt Bauman’s superb book, Modernity and the Holocaust, for those interested in how parallel mechanisms of distance, concealment, and surveillance worked to neutralize the killing work taking place in Auschwitz and other concentration camps. The lesson here, of course, is not that slaughterhouses and genocides are morally or functionally equivalent, but rather that large-scale, routinized, and systematic violence is entirely consistent with the kinds of bureaucratic structures and mechanisms we typically associate with modern civilization. The sociologist Norbert Elias argues—convincingly, in my view—that it is the “concealment” and “displacement” of violence, rather than its elimination or reduction, that is the hallmark of civilization. In my view, the contemporary industrialized slaughterhouse provides an exemplary case that highlights some of the most salient features of this phenomenon.

Avi: Violence is found hidden in even the most “normal” of lives. How can we spot this pervading presence in our daily life?

Timothy: We—the ‘we’ of the relatively affluent and powerful—live in a time and a spatial order in which the ‘normalcy’ of our lives requires our active complicity in forms of exploitation and violence that we would decry and disavow were the physical, social, and linguistic distances that separate us from them ever to be collapsed. This is true of the brutal and entirely unnecessary confinement and killing of billions of animals each year for food, of the exploitation and suffering of workers in Shenzhen, China who produce our iPads and cell phones, of the ‘enhanced interrogation techniques’ deployed in the name of our security, and of the ‘collateral damage’ created by the unmanned-aerial-vehicles that our taxes fund. Our complicity lies not in a direct infliction of violence but rather in our tacit agreement to look away and not to ask some very, very simple questions: Where does this meat come from and how did it get here? Who assembled the latest gadget that just arrived in the mail? What does it mean to create categories of torturable human beings? The mechanisms of distancing and concealment inherent in our divisions of space and labor and in our unthinking use of euphemistic language make it seductively easy to avoid pursuing the complex answers to these simple questions with any sort of determination.

Months after I left the slaughterhouse, I got into an argument with a brilliant friend over who was more morally responsible for the killing of the animals: those who ate meat or the 121 workers who did the killing. She maintained, passionately and with conviction, that the people who did the killing were more responsible because they were the ones performing the physical actions that took the animals’ lives. Meat eaters, she claimed, were only indirectly responsible. At the time, I took the opposite position, holding that those who benefited at a distance, delegating this terrible work to others while disclaiming responsibility for it, bore more moral responsibility, particularly in contexts like the slaughterhouse, where those with the fewest opportunities in society performed the dirty work.

I am now more inclined to think that it is the preoccupation with moral responsibility itself that serves as a deflection. In the words of philosopher John Lachs, ‘The responsibility for an act can be passed on, but its experience cannot.’ I’m keenly interested in asking what it might mean for those who benefit from physically and morally dirty work not only to assume some share of responsibility for it but also to directly experience it. What might it mean, in other words, to collapse some of the mechanisms of physical, social, and linguistic distances that separate our ‘normal’ lives from the violence and exploitation required to sustain and reproduce them? I explore some of these questions at greater length in the final chapter of my book.

 

Avi: Who was Cinci Freedom? What mythologizing purpose does she serve?

Timothy: I open the book with the story of a cow that escaped from a slaughterhouse up the street from the one I was working in. Omaha police chased the cow and cornered it in an alleyway that bordered my slaughterhouse. It happened to be during our ten-minute afternoon break, and many of the slaughterhouse workers witnessed the police opening fire on the animal with shotguns. The next day in the lunchroom, the anger, disgust, and horror at the police killing of the animal were palpable, as was the strong sense of identification with the animal’s treatment at the hands of the police. And yet, at the end of the lunch break, workers returned to work on a kill floor that killed 2,500 animals each day.

Cinci Freedom was another Charolais cow that escaped from a Cincinnati slaughterhouse in 2002. She was recaptured after several days only with the help of thermal imaging equipment deployed from a police helicopter. Unlike the anonymous Omaha cow that was gunned down by the police, Cinci Freedom became an instant celebrity. The mayor gave her a key to the city and she was provided passage to The Farm Sanctuary in Watkins Glen, NY, where she lived until 2008.

Although at first glance the fates of the Omaha cow and of Cinci Freedom are very different, I think both responses are equally effective ways of neutralizing the threat posed by these animals. Their escapes from the slaughterhouse were not just physical escapes but also conceptual escapes, moments of rupture in an otherwise routine and normalized system of industrialized killing. Extermination and elevation to celebrity status (not unlike the ritual presidential pardoning of the Thanksgiving turkey) are both ways of containing the dangers posed by these moments of conceptual rupture. They also point to the promises and limitations of rupture as a political tactic, for example the digital ruptures that occur with the release of shocking undercover footage from slaughterhouses and other zones of confinement where the work of violence is routinely carried out on our behalf.

Do we live in a 2-D hologram? Experiment will test the nature of the universe (Science Daily)

Date: August 26, 2014

Source: DOE/Fermi National Accelerator Laboratory

Summary: A unique experiment called the Holometer has started collecting data that will answer some mind-bending questions about our universe — including whether we live in a hologram. 

A Fermilab scientist works on the laser beams at the heart of the Holometer experiment. The Holometer will use twin laser interferometers to test whether the universe is a 2-D hologram. Credit: Fermilab

A unique experiment at the U.S. Department of Energy’s Fermi National Accelerator Laboratory called the Holometer has started collecting data that will answer some mind-bending questions about our universe — including whether we live in a hologram.

Much like characters on a television show would not know that their seemingly 3-D world exists only on a 2-D screen, we could be clueless that our 3-D space is just an illusion. The information about everything in our universe could actually be encoded in tiny packets in two dimensions.

Get close enough to your TV screen and you’ll see pixels, small points of data that make a seamless image if you stand back. Scientists think that the universe’s information may be contained in the same way and that the natural “pixel size” of space is roughly 10 trillion trillion times smaller than an atom, a distance that physicists refer to as the Planck scale.
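
As a rough sanity check on that comparison, here is a minimal back-of-the-envelope sketch in Python; the atom diameter and Planck length used here are standard reference values assumed for illustration, not figures taken from the article:

```python
# Order-of-magnitude check: how many times smaller than an atom is the Planck scale?
planck_length_m = 1.6e-35   # Planck length in meters (standard reference value)
atom_diameter_m = 1.0e-10   # rough diameter of a hydrogen atom in meters

ratio = atom_diameter_m / planck_length_m
print(f"An atom is roughly {ratio:.0e} Planck lengths across")
# ~6e24, i.e. on the order of ten trillion trillion, consistent with the article
```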

“We want to find out whether space-time is a quantum system just like matter is,” said Craig Hogan, director of Fermilab’s Center for Particle Astrophysics and the developer of the holographic noise theory. “If we see something, it will completely change ideas about space we’ve used for thousands of years.”

Quantum theory suggests that it is impossible to know both the exact location and the exact speed of subatomic particles. If space comes in 2-D bits with limited information about the precise location of objects, then space itself would fall under the same theory of uncertainty. In the same way that matter continues to jiggle (as quantum waves) even when cooled to absolute zero, this digitized space should have built-in vibrations even in its lowest energy state.

Essentially, the experiment probes the limits of the universe’s ability to store information. If there is a set number of bits that tell you where something is, it eventually becomes impossible to find more specific information about the location — even in principle. The instrument testing these limits is Fermilab’s Holometer, or holographic interferometer, the most sensitive device ever created to measure the quantum jitter of space itself.

Now operating at full power, the Holometer uses a pair of interferometers placed close to one another. Each one sends a one-kilowatt laser beam (the equivalent of 200,000 laser pointers) at a beam splitter and down two perpendicular 40-meter arms. The light is then reflected back to the beam splitter where the two beams recombine, creating fluctuations in brightness if there is motion. Researchers analyze these fluctuations in the returning light to see if the beam splitter is moving in a certain way — being carried along on a jitter of space itself.

“Holographic noise” is expected to be present at all frequencies, but the scientists’ challenge is not to be fooled by other sources of vibrations. The Holometer is testing a frequency so high — millions of cycles per second — that motions of normal matter are not likely to cause problems. Rather, the dominant background noise is more often due to radio waves emitted by nearby electronics. The Holometer experiment is designed to identify and eliminate noise from such conventional sources.
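
The release does not spell out the analysis, but the point of running two interferometers side by side is presumably that a genuine jitter of space would show up in both instruments at once, while each instrument's own electronic noise would not. A minimal sketch of that idea, with invented numbers purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 500_000

# Toy model: a small jitter common to both interferometers, buried under
# much larger noise that is independent in each instrument.
common_jitter = 0.1 * rng.standard_normal(n_samples)
output_a = common_jitter + rng.standard_normal(n_samples)
output_b = common_jitter + rng.standard_normal(n_samples)

# Averaging the product of the two outputs suppresses the independent noise
# and leaves an estimate of the power of the shared jitter (~0.01 here).
cross_power = np.mean(output_a * output_b)
auto_power = np.mean(output_a * output_a)   # dominated by instrument noise (~1.01)
print(f"cross-power ~ {cross_power:.3f}, single-instrument power ~ {auto_power:.3f}")
```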

“If we find a noise we can’t get rid of, we might be detecting something fundamental about nature — a noise that is intrinsic to space-time,” said Fermilab physicist Aaron Chou, lead scientist and project manager for the Holometer. “It’s an exciting moment for physics. A positive result will open a whole new avenue of questioning about how space works.”

The Holometer experiment, funded by the U.S. Department of Energy Office of Science and other sources, is expected to gather data over the coming year.

Gut bacteria that protect against food allergies identified (Science Daily)

Date: August 25, 2014

Source: University of Chicago Medical Center

Summary: The presence of Clostridia, a common class of gut bacteria, protects against food allergies, a new study in mice finds. The discovery points toward probiotic therapies for this so-far untreatable condition. Food allergies affect 15 million Americans, including one in 13 children, who live with this potentially life-threatening disease that currently has no cure, researchers note.

Artist’s rendering of bacteria (stock illustration). Credit: © zuki70 / Fotolia

The presence of Clostridia, a common class of gut bacteria, protects against food allergies, a new study in mice finds. By inducing immune responses that prevent food allergens from entering the bloodstream, Clostridia minimize allergen exposure and prevent sensitization — a key step in the development of food allergies. The discovery points toward probiotic therapies for this so-far untreatable condition, scientists from the University of Chicago reported Aug. 25 in the Proceedings of the National Academy of Sciences.

Although the causes of food allergy — a sometimes deadly immune response to certain foods — are unknown, studies have hinted that modern hygienic or dietary practices may play a role by disturbing the body’s natural bacterial composition. In recent years, food allergy rates among children have risen sharply — increasing approximately 50 percent between 1997 and 2011 — and studies have shown a correlation to antibiotic and antimicrobial use.

“Environmental stimuli such as antibiotic overuse, high fat diets, caesarean birth, removal of common pathogens and even formula feeding have affected the microbiota with which we’ve co-evolved,” said study senior author Cathryn Nagler, PhD, Bunning Food Allergy Professor at the University of Chicago. “Our results suggest this could contribute to the increasing susceptibility to food allergies.”

To test how gut bacteria affect food allergies, Nagler and her team investigated the response to food allergens in mice. They exposed germ-free mice (born and raised in sterile conditions to have no resident microorganisms) and mice treated with antibiotics as newborns (which significantly reduces gut bacteria) to peanut allergens. Both groups of mice displayed a strong immunological response, producing significantly higher levels of antibodies against peanut allergens than mice with normal gut bacteria.

This sensitization to food allergens could be reversed, however, by reintroducing a mix of Clostridia bacteria back into the mice. Reintroduction of another major group of intestinal bacteria, Bacteroides, failed to alleviate sensitization, indicating that Clostridia have a unique, protective role against food allergens.

Closing the door

To identify this protective mechanism, Nagler and her team studied cellular and molecular immune responses to bacteria in the gut. Genetic analysis revealed that Clostridia caused innate immune cells to produce high levels of interleukin-22 (IL-22), a signaling molecule known to decrease the permeability of the intestinal lining.

Antibiotic-treated mice were either given IL-22 or were colonized with Clostridia. When exposed to peanut allergens, mice in both conditions showed reduced allergen levels in their blood, compared to controls. Allergen levels significantly increased, however, after the mice were given antibodies that neutralized IL-22, indicating that Clostridia-induced IL-22 prevents allergens from entering the bloodstream.

“We’ve identified a bacterial population that protects against food allergen sensitization,” Nagler said. “The first step in getting sensitized to a food allergen is for it to get into your blood and be presented to your immune system. The presence of these bacteria regulates that process.” She cautions, however, that these findings likely apply at a population level, and that the cause-and-effect relationship in individuals requires further study.

While complex and largely undetermined factors such as genetics greatly affect whether individuals develop food allergies and how they manifest, the identification of a bacteria-induced barrier-protective response represents a new paradigm for preventing sensitization to food. Clostridia bacteria are common in humans and represent a clear target for potential therapeutics that prevent or treat food allergies. Nagler and her team are working to develop and test compositions that could be used for probiotic therapy and have filed a provisional patent.

“It’s exciting because we know what the bacteria are; we have a way to intervene,” Nagler said. “There are of course no guarantees, but this is absolutely testable as a therapeutic against a disease for which there’s nothing. As a mom, I can imagine how frightening it must be to worry every time your child takes a bite of food.”

“Food allergies affect 15 million Americans, including one in 13 children, who live with this potentially life-threatening disease that currently has no cure,” said Mary Jane Marchisotto, senior vice president of research at Food Allergy Research & Education. “We have been pleased to support the research that has been conducted by Dr. Nagler and her colleagues at the University of Chicago.”


Journal Reference:

  1. A. T. Stefka, T. Feehley, P. Tripathi, J. Qiu, K. McCoy, S. K. Mazmanian, M. Y. Tjota, G.-Y. Seo, S. Cao, B. R. Theriault, D. A. Antonopoulos, L. Zhou, E. B. Chang, Y.-X. Fu, C. R. Nagler. Commensal bacteria protect against food allergen sensitization. Proceedings of the National Academy of Sciences, 2014; DOI: 10.1073/pnas.1412008111

Cutting emissions pays for itself, study concludes (Science Daily)

Date: August 24, 2014

Source: Massachusetts Institute of Technology

Summary: Health care savings can greatly defray costs of carbon-reduction policies, experts report. But just how large are the health benefits of cleaner air in comparison to the costs of reducing carbon emissions? Researchers looked at three policies achieving the same reductions in the U.S., and found that the savings on health care spending and other costs related to illness can be big — in some cases, more than 10 times the cost of policy implementation. 


“Carbon-reduction policies significantly improve air quality,” says Noelle Selin. “In fact, policies aimed at cutting carbon emissions improve air quality by a similar amount as policies specifically targeting air pollution.” Credit: Illustration: Christine Daniloff/MIT

Lower rates of asthma and other health problems are frequently cited as benefits of policies aimed at cutting carbon emissions from sources like power plants and vehicles, because these policies also lead to reductions in other harmful types of air pollution.

But just how large are the health benefits of cleaner air in comparison to the costs of reducing carbon emissions? MIT researchers looked at three policies achieving the same reductions in the U.S., and found that the savings on health care spending and other costs related to illness can be big — in some cases, more than 10 times the cost of policy implementation.

“Carbon-reduction policies significantly improve air quality,” says Noelle Selin, an assistant professor of engineering systems and atmospheric chemistry at MIT, and co-author of a study published today in Nature Climate Change. “In fact, policies aimed at cutting carbon emissions improve air quality by a similar amount as policies specifically targeting air pollution.”

Selin and colleagues compared the health benefits to the economic costs of three climate policies: a clean-energy standard, a transportation policy, and a cap-and-trade program. The three were designed to resemble proposed U.S. climate policies, with the clean-energy standard requiring emissions reductions from power plants similar to those proposed in the Environmental Protection Agency’s Clean Power Plan.

Health savings constant across policies

The researchers found that savings from avoided health problems could recoup 26 percent of the cost to implement a transportation policy, but up to 10.5 times the cost of implementing a cap-and-trade program. The difference depended largely on the costs of the policies, as the savings — in the form of avoided medical care and saved sick days — remained roughly constant: Policies aimed at specific sources of air pollution, like power plants and vehicles, did not lead to substantially larger benefits than cheaper policies, like a cap-and-trade approach.

Savings from health benefits dwarf the estimated $14 billion cost of a cap-and-trade program. At the other end of the spectrum, a transportation policy with rigid fuel-economy requirements is the most expensive policy, costing more than $1 trillion in 2006 dollars, with health benefits recouping only a quarter of those costs. The price tag of a clean energy standard fell between the costs of the two other policies, with associated health benefits just edging out costs, at $247 billion versus $208 billion.
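
Reading those dollar figures as benefit-to-cost ratios makes the spread easier to see; here is a minimal sketch using only the numbers quoted in this article (the transportation and cap-and-trade ratios are taken as reported, since the article gives those directly rather than both dollar amounts):

```python
# Benefit-to-cost comparison using the figures quoted above (2006 dollars).
clean_energy_benefit_bn = 247   # estimated health-related savings, $ billions
clean_energy_cost_bn = 208      # estimated policy cost, $ billions

print(f"clean energy standard: {clean_energy_benefit_bn / clean_energy_cost_bn:.2f}x")  # ~1.19x
print("transportation policy: ~0.26x (26 percent of costs recouped, per the article)")
print("cap-and-trade program: up to ~10.5x (per the article)")
```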

“If cost-benefit analyses of climate policies don’t include the significant health benefits from healthier air, they dramatically underestimate the benefits of these policies,” says lead author Tammy Thompson, now at Colorado State University, who conducted the research as a postdoc in Selin’s group.

Most detailed assessment to date

The study is the most detailed assessment to date of the interwoven effects of climate policy on the economy, air pollution, and the cost of health problems related to air pollution. The MIT group paid especially close attention to how changes in emissions caused by policy translate into improvements in local and regional air quality, using comprehensive models of both the economy and the atmosphere.

In addition to carbon dioxide, burning fossil fuels releases a host of other chemicals into the atmosphere. Some of these substances interact to form ground-level ozone, as well as fine particulate matter. The researchers modeled where and when these chemical reactions occurred, and where the resulting pollutants ended up — in cities where many people would come into contact with them, or in less populated areas.

The researchers projected the health effects of ground-level ozone and fine particulate matter, two of the biggest health offenders related to fossil-fuel emissions. Both pollutants can cause asthma attacks and heart and lung disease, and can lead to premature death.

In 2011, 231 counties in the U.S. exceeded the EPA’s regulatory standards for ozone, the main component of smog. Standards for fine particulate matter — airborne particles small enough to be inhaled deep into the lungs and even absorbed into the bloodstream — were exceeded in 118 counties.

While cutting carbon dioxide from current levels in the U.S. will result in savings from better air quality, pollution-related benefits decline as carbon policies become more stringent. Selin cautions that after a certain point, most of the health benefits have already been reaped, and additional emissions reductions won’t translate into greater improvements.

“While air-pollution benefits can help motivate carbon policies today, these carbon policies are just the first step,” Selin says. “To manage climate change, we’ll have to make carbon cuts that go beyond the initial reductions that lead to the largest air-pollution benefits.”

 

Journal Reference:

  1. Tammy M. Thompson, Sebastian Rausch, Rebecca K. Saari, Noelle E. Selin. A systems approach to evaluating the air quality co-benefits of US carbon policies. Nature Climate Change, 2014; DOI: 10.1038/nclimate2342

Indigenous knowledge preserved in a synthetic-paper book (Fapesp)

The book describes 109 species of medicinal plants and their uses. A shaman of the Huni Kuĩ people, in Acre, conceived the project, and the paper is the result of research supported by FAPESP (photo: Pascale/divulgação)

25/08/2014

By Karina Toledo

Agência FAPESP – A synthetic paper made from recycled plastic – the result of research carried out with FAPESP support – is helping to preserve the knowledge of medicinal plants passed down orally for centuries by the shamans of the Huni Kuĩ people of the Jordão River, in Acre.

Descriptions of 109 species used in indigenous medicine, along with information on where they occur and how they are used in treatment, have been gathered in the book Una Isĩ Kayawa, Livro da Cura (Book of Healing), produced by the Instituto de Pesquisa do Jardim Botânico do Rio de Janeiro (IJBRJ) and by Editora Dantes.

The book had a print run of 3,000 copies on ordinary coated paper, aimed at the general public and recently launched in Rio de Janeiro and São Paulo. Another thousand copies, intended exclusively for the indigenous communities, were printed on synthetic paper, which is waterproof and has the texture of coated paper, so as to last longer in the humid environment of the forest. The launch was celebrated with a large festival in one of the Huni Kuĩ villages on the Jordão River.

The work of researching and organizing the information took two and a half years and was coordinated by botanist Alexandre Quinet of the Jardim Botânico do Rio de Janeiro. The project's great visionary, however, was the shaman Agostinho Manduca Mateus Ĩka Muru, who died shortly before the book was completed.

"Shaman Ĩka Muru was a scientist of the forest, an observer of plants. For more than 20 years he had been gathering this hitherto oral knowledge in his little notebooks, seeking information from the elders and passing it on to apprentice shamans. His dream was to record it all in a printed book, as white people do, and leave it available to future generations," Quinet said.

Also known as the "Kaxinawá" – a term the indigenous people dislike – or as the "true people" or "people of the vine," the Huni Kuĩ are the largest indigenous group in Acre, with a presence extending into part of Peru. In Brazil they number more than 7,000 individuals, spread across 12 different territories. The "book of healing" depicts the medicine practiced in the 33 villages of one of these indigenous territories, which stretches along the Jordão River.

Besides the information on plants, the book presents, through first-person accounts and drawings, some of the culture of the Huni Kuĩ people, such as their eating habits, their music, and their conceptions of illness and spirituality. All of the content is written in "hatxa kuĩ" – the language spoken in the villages of the Jordão River – and translated into Portuguese.

According to Quinet, five trips were made to the Acre region to carry out the research, in addition to four residency periods for Huni Kuĩ translators in Rio de Janeiro.

"We held a 15-day workshop that brought together the 22 shamans of the Jordão River villages. The chapters of the book are, in fact, literal transcriptions of the topics they addressed, organized according to the indigenous system. They were revised only to make them easier to understand," Quinet said.

Of the 351 species listed by the shamans as medicinal, the researchers collected 196 samples – resulting in the selection of the 109 plants included in the book. The botanical material was identified according to standard taxonomic techniques and deposited in the herbarium of the Instituto de Pesquisa do Jardim Botânico do Rio de Janeiro.

Starting from the species' scientific names, the researchers also surveyed the scientific literature, where possible, for uses made by other peoples around the world. The work involved the collaboration of 21 taxonomists from Brazilian and international institutions.

"Shaman Ĩka Muru's initial goal was to create a teaching resource for apprentice shamans, to make it easier to locate the plants in the medicinal gardens. But the book also aims to spread the tribe's culture and, more broadly, the importance of preserving the forest. They approached the Jardim Botânico so that this knowledge could be universalized on a scientific footing," Quinet said.

Vitopaper

To represent the content written in "hatxa kuĩ," publisher Anna Dantes created a special typeface inspired by the handwritten letters in the indigenous notebooks. The idea of producing a special edition on synthetic paper was also hers.

"At the very first meeting with the shamans in the forest, I could see that books sent to indigenous peoples suffer a great deal of damage from the humidity and become very perishable. The pages warp and stick to one another. They are also damaged by small animals, such as termites. It was a very ambitious project, one that would demand a great effort, and we could not create a product that would disappear in a short time," Dantes said.

Dantes said it was the first time she had worked with Vitopaper, a material produced by the company Vitopel and originally developed by Sati Manrich, a researcher at the Universidade Federal de São Carlos (UFSCar).

"When we started the project, in 2003, our idea was to find a single solution to two major problems: the felling of trees for cellulose production and the difficulty of properly disposing of the large volume of plastic waste generated in urban centers," Manrich said.

In the process she developed at the university, plastic from food packaging, cleaning products, and water bottles is sanitized and ground. Mineral particles are then added to give it optical properties – such as gloss, whiteness, contrast, and light scattering and absorption – and mechanical resistance to tearing, tension, and folding.

The mixture is fed into an extruder at high temperatures, where it softens and melts. At the end, the material becomes a large, thin sheet, similar to paper made from cellulose, which is rolled and cut according to the application.

According to Vitopel, for every tonne of Vitopaper produced, 750 kilograms of plastic waste are removed from streets and dumps. In addition, according to Manrich, about 30 trees are spared from being cut down.

"I would like this example to be followed not only to immortalize indigenous knowledge but also in the production of textbooks, since they would last much longer," Manrich said.

Una Isĩ Kayawa, Livro da Cura
Editors: Agostinho Manduca Mateus Ĩka Muru and Alexandre Quinet
Published: 2014
Price: R$ 120.00
Pages: 260

More information: www.facebook.com/UnaIsiKayawa?fref=ts.

Americans are aware of the environmental impact of their consumption habits (Akatu)

22/8/2014 – 03h51

by the Akatu editorial staff

Survey finds that 70% of Americans acknowledge they are responsible for many of the environmental problems caused by excessive consumption.

"Americans are responsible for many environmental problems because they consume more resources and produce more waste than other countries." Seventy percent of the American citizens interviewed for the "New American Dream" study agreed with that statement. The survey was conducted by PolicyInteractive for the Center for a New American Dream between March and April of this year, with 1,821 Americans over 18 years of age.

The study found that 85% of Americans are aware that substantial changes in their way of life are needed to protect the environment. They are also aware of the country's excessive waste production: 91% agreed that their way of life generates too much waste.

The survey, which investigates Americans' aspirations, covered subjects such as the economy, the environment, advertising, and health, using the same questions asked in the previous study ten years earlier. Its conclusions indicate that the "American dream" now follows the path of sustainability more than that of consumerism: respondents place greater value on personal freedom, the opportunity to explore their individual potential, and connection with nature. In addition, 38% of respondents had taken steps in the past five years to reduce their working hours, even if that meant lower pay.

In Brazil, the Instituto Akatu has identified growing understanding of sustainability and growing interest in information about it. The share of Brazilians who had "heard of" the term sustainability rose from 44% to 60% in two years, as did interest in seeking information on the subject (from 14% to 24%), according to the survey Pesquisa Akatu 2012: Rumo à Sociedade do Bem-Estar. Compared with various other subjects, the only two that showed significant growth in consumer interest were precisely Corporate Social Responsibility and Sustainability: in 2010 both ranked below all the others, while in 2012, 24% expressed interest in Sustainability and 25% in Corporate Social Responsibility, practically on a par with traditional subjects such as Business (26%) and Politics (30%).

The survey also concludes that adherence to conscious-consumption practices has grown in Brazil, even if, for now, only occasionally rather than consistently. Of 11 behaviors considered indicative of conscious consumption, when consumers who "always" adopt them are added to those who adopt them "sometimes," eight behaviors showed an increase over 2010, among them: planning food and clothing purchases, switching off lights, turning off taps, using both sides of a sheet of paper, and reading product labels.

This trend is reinforced by another important finding of the Akatu survey: asked to rank their desires, a significant majority of respondents opted for the more sustainable options. In five of the eight themes proposed (affection, food, water, mobility, durability, energy, waste, and health), they preferred alternatives associated with the "path of sustainability" over those tied to the "path of consumerism." One example is the theme of affection, which shows the largest gap between consumers who prefer the more sustainable scenario (spending time with friends and family – with a priority score of 8.3 on a scale of 0 to 10) and the consumerist one (buying gifts – with a score of 2.6). Notably, the preference for the "path of sustainability" holds across all social classes, age groups, and socioeconomic and geographic segments.

* Originally published on the Akatu website.

The water crisis in São Paulo (Envolverde)

22/8/2014 – 04h28

by Heitor Scalambrini Costa*

Cantareira reservoir (photo)

There is no arguing with facts. What is happening right now with the water shortage in São Paulo fits the old pattern in which a lie repeated often enough ends up becoming the truth.

The São Paulo state government insists on denying that the current situation could have been less dramatic had the necessary works been carried out. And it further insists on blaming Saint Peter for the evident chaos. The drought is not to blame! The drought is part of the problem, since it has always been known that it might come.

Public officials likewise deny that there is rationing, claiming that the water supply is guaranteed until March 2015, even though, in practice, rationing officially exists in dozens of municipalities.

On a visit to the interior of São Paulo in early August [2014], I was able to confirm something I had not yet grasped: the severity of the water crisis affects not only the capital's metropolitan region, as the press suggests by emphasizing the collapse of the Cantareira system, but the whole of the richest state in the country.

Of São Paulo's 645 municipalities, Sabesp (Companhia de Saneamento Básico de São Paulo) is responsible for supplying water to 364, home to a total of 27.7 million people. In the other 281 municipalities (not served by the company), supplying water to 16 million people falls to the municipal governments themselves or to companies they contract.

While the state water company denies having adopted rotating water cuts in any of the municipalities it serves, including the capital, that claim is promptly contradicted by users, who report supply interruptions, especially at night.

In the municipalities not served by Sabesp, restrictive measures are being taken by hundreds of local utilities and officials because of the crisis. In Guarulhos, in greater São Paulo, water for 1.3 million residents is supplied by a municipal service, SAAE (Serviço Autônomo de Água e Esgoto), and its residents go without water one day in every two.

In 18 municipalities in the state of São Paulo, about 2.1 million people are under official rationing, corresponding to 5% of the total population, according to a survey by the newspaper Folha de São Paulo (Aug. 11). Beyond rationing, measures to encourage water saving have been adopted, ranging from fines to curb waste to campaigns raffling off cars and TVs to those who voluntarily cut their consumption.

What strikes everyone, beyond the statewide scale of São Paulo's water crisis, is officials' insistence on denying the existence of rationing in Sabesp's service area – even when contradicted by residents, who in practice suffer from the rotating cuts imposed by the company, with growing interruptions in the water supply.

The counterpart of power is responsible action. And the São Paulo state government has shown itself irresponsible toward its people, as well as incompetent and mediocre in resolving basic issues for its population. It is time to acknowledge the gravity of the situation and of the mistakes that were made and, naturally, to carry out the urgent and necessary works to guarantee a secure supply of this good that is fundamental to life.

Enough hypocrisy, enough blaming Saint Peter, who cannot defend himself.

* Heitor Scalambrini Costa is an associate professor at the Universidade Federal de Pernambuco. He holds a degree in physics from UNICAMP and a doctorate in energy studies from the University of Marseille/Atomic Energy Commission, France.

** Originally published on the IHU On-Line website.

"A que será que se destinam?" [What might they be meant for?] (Le Monde Diplomatique)

ISOLATED AND RECENTLY CONTACTED INDIGENOUS PEOPLES IN BRAZIL

After 26 years, it is possible to celebrate the effectiveness of the principles of the System for the Protection of Isolated Indians: respect for these peoples' decision to remain isolated and the self-determination of recently contacted groups. Nonetheless, difficulties point toward a collapse of the system.

by Antenor Vaz

Sightings of or contacts with "isolated" indigenous people in South America have become a recurring story in the international press. Brazil, Ecuador, Peru, Colombia, Bolivia, Paraguay, and Venezuela hold more than two hundred records of the presence of isolated and/or recently contacted indigenous groups.

Brazil was back in the news when a group of seven isolated indigenous people decided to make contact with the Ashaninka of the Simpatia village (located in the Kampa/Isolados Indigenous Territory on the Upper Envira River, in Acre, a region on Brazil's border with Peru). On the morning of June 11, a group of isolated people attempted verbal communication but was not understood by the Ashaninka. Through gestures, they asked for clothing and manufactured goods – machetes and pots, among other things. For about three years now, these "uncontacted" people have been sighted near the Ashaninka villages in search of manufactured goods and garden produce.

This episode arouses curiosity about the formerly isolated group, but it also raises other questions: are there other isolated indigenous groups in the national territory? How many? What happens to these groups once contact has been made? Are there public policies directed at these peoples? How does the Brazilian state conceive of the issue, and what instruments of "protection" exist for them?

 

Isolated indigenous peoples

Roughly 90% of the isolated indigenous peoples remaining on the planet live in seven countries of the Amazon basin and the Paraguayan Chaco, in forests where ecosystem cycles and biodiversity remain preserved. These peoples keep themselves in isolation as a defense against a contact that has proved destructive, whether through conflicts with "whites" or with other indigenous peoples. The decision to remain isolated is expressed through threatening acts directed at invaders, but above all through systematic flight toward territories ever more distant from the expansion fronts of "civilization" – territories that are scarce and subject to a greed that covets every centimeter of land for the complete conversion of "nature" into "natural resources."

For the Brazilian state, the definition of "isolated Indians" is still that of the Indian Statute (1973): those who "live in unknown groups, or groups of which there is little and vague information obtained through occasional contacts with elements of the national community." Recently contacted indigenous groups, for Funai, are "groups that maintain permanent and/or intermittent contact with segments of national society and that, regardless of the length of contact, display singularities in their relationship with national society and selectivity (autonomy) in the incorporation of goods and services."

With the 1988 Constitution, Funai instituted a specific policy for the protection of isolated Indians, grounded in the "premise of no contact" as a "prerogative of the self-determination" of these peoples. In 2009 it recognized the need to devise distinct policies for recently contacted groups. Despite the selfless initiatives of civil servants, of organized civil society, and of isolated individuals, these policies tend to be of limited effect given the dismantling and loss of standing of the official indigenist agency relative to other government policies. But what, after all, is the state's policy for isolated and recently contacted indigenous peoples in today's Brazil? To answer that, we need to step back into history.

 

Indigenist policy from the Colony to the Republic

Indigenist policy in colonial Brazil, the Empire, and the Old Republic bore the mark of the traffic in indigenous and African slaves and of the conflicts among local oligarchies, seconded by the waves of European immigration. In this context, "the indigenous question" shifted from a question of labor to a question of land.1 The debate turned on whether to exterminate the "wild" Indians or to civilize them. For practical and administrative purposes, up to the nineteenth century Indians were divided into "wild" and "domestic or tame." The "wild" ones, refusing to submit to the mission settlements and, consequently, to the law, were persecuted and exterminated. These two conceptions still populate the imagination of the Brazilian public. The creation, in 1910, of the Service for the Protection of Indians and the Settlement of National Workers (SPILTN), renamed the Indian Protection Service (SPI) in 1918, rationalized the incorporation of indigenous territories and populations into Brazilian society during the First Republic. The main architect of this project was Marshal Rondon, who applied a military system of defense of the country's territorial integrity. For the SPI, it fell to the Republic to rescue the indigenous populations from extermination. The symbol of the new orientation was the replacement of the word "catechesis" by the term "protection." Broadly speaking, the state's indigenist policy amounted to a policy of attraction/pacification as the premise of protection, fostering the transformation of Indians into agricultural laborers, leading to the physical extermination and cultural annihilation of these societies, and serving the integration of indigenous territories into Brazilian society.

 

Funai – "Contact" as the paradigm of "protection"

In 1967, amid allegations of corruption in the SPI, a Commission of Inquiry was set up within the agency. Its Final Report,2 published in 1968, among other conclusions, ordered the dismissal and suspension of some twenty officials. That same year, in a context of bureaucratic reorganization of the state, the military abolished the SPI and created Funai. With regard to isolated Indians, the principles of contact/attraction were maintained as the guiding premises of protection.

 

Protection of isolated Indians in the context of redemocratization

In 1987, Funai created the Coordination Office for Elusive Indians (Coordenadoria de Índios Arredios), giving it responsibility for coordinating actions related to attracting and contacting "elusive" indigenous groups. That same year, the First Meeting of Sertanistas, coordinated by the sertanista Sydney Possuelo, was held with the purpose of "analyzing the policy of attracting elusive indigenous groups, with a view to defining a new posture for Funai." The event became a watershed, for it formulated the shift from the paradigm of "contact" to that of "no contact" as the premise for protecting isolated peoples. Also in 1987, Funai introduced the Coordination Office for Isolated Indians (CII),3 established guidelines, and created the System for the Protection of Isolated Indians (SPII). Taking the 1988 Constitution and the principle of the self-determination of peoples as its reference, Funai defined as one of its guidelines guaranteeing "to isolated Indians and groups the right to remain so, preserving the integrity of their territory and intervening only when some factor places their survival and sociocultural organization at risk." The innovative work carried out by the Team for the Location of Isolated Indians in the Guaporé Biological Reserve, between 1989 and 1994, resulted in the first indigenous territory demarcated exclusively for an isolated group without contact being established.

 

The Protection System for Isolated and Recently Contacted Indians

The SPII, originally conceived in 1987, is the administrative structure charged with the physical, patrimonial, and cultural protection of isolated indigenous groups. In 2007, after two decades of experience, the System for the Protection and Promotion of the Rights of Isolated and Recently Contacted Indians (SPIIRC) was formulated, subdivided into four subsystems: 1) Management (Planning, Administration, Systematization, Communication, and Training); 2) Protection (Location, Monitoring, and Surveillance); 3) Promotion of rights (Educational Processes and Exchange, Ethno-environmental Education, and Health); and 4) Contact. The work of protection, promotion of rights, and contact is carried out by teams known as Ethno-environmental Protection Fronts (FPEs). In this system, contact may be established by decision of the isolated indigenous group, by outsiders, or by Funai when there is imminent danger of extinction.

In 2003, with the definition of a new statute for Funai, the General Coordination Office for Recently Contacted Indigenous Peoples was created, but the issue of recently contacted groups only returned to the agenda in 2007 and was institutionalized with Funai's restructuring between 2009 and 2012. The practices developed with recently contacted indigenous groups (the Zo'é, Korubo, Akuntsu, Kanoé, Piripikura, and Awá Guajá, among others) made it necessary to rethink the established approaches. In 2010, work began on designing programs that prioritized the sociocultural promotion and the physical and territorial protection of these extremely vulnerable peoples, which would result in the formulation of the Policy for Recently Contacted Indigenous Peoples. To this day, Funai has yet to publish an ordinance instituting this public policy.

 

How many isolated and recently contacted Indians are there in Brazil, and where are they?

In 1988, the sertanista Wellington Figueiredo mapped the isolated indigenous groups in Brazil, listing 88 locations where isolated groups might be present. Each of these locations was given the designation of a "reference."

The latest updates carried out by Funai indicate 104 records of isolated Indians and sixteen of groups considered recently contacted in Brazil (see table).

 

The current scenario: two decades of the SPII

After 26 years of operation of the System for the Protection of Isolated Indians, it is possible to celebrate the effectiveness of its principles: respect for these peoples' decision to remain isolated and the self-determination of recently contacted groups. However, difficulties of both a conjunctural and a structural nature point toward a collapse of the SPIIRC. The growing pressure of the expansionist, developmentalist front on the territories occupied by isolated and recently contacted Indians, including in international border regions, the lack of political support or the outright omission of the constituted powers, the increase in missionary proselytizing, illegal economic activities, the high-impact projects derived from government policies and programs, and private ventures will drive isolated groups to seek contact as their only means of survival. In this way, the state policy of "no contact" is turning into mere rhetorical fiction.

 

Challenges for the policy on isolated and recently contacted Indians

Throughout this survey of state action, the term "protection" has taken on different connotations and practices depending on the political and economic moment: "protection" as pacification/contact, with the aim of incorporating the indigenous into civilization (Rondon); contact from the perspective of protectionism, with the slow and controlled "acculturation" of the indigenous (the Villas Bôas brothers); contact from the perspective of integration into the regional market (Francisco Meirelles).4 All of them under the umbrella of contact as the prerogative of "protection." In the New Republic, with the decision taken at the Meeting of Sertanistas (1987) in the context of the constituent process, the policy for isolated groups changed radically, adopting "no contact" as the premise of protection and consequently introducing the System for the Protection of Isolated Indians from the perspective of territorial protection (Sydney Possuelo). At present, Funai's General Coordination Office for Isolated and Recently Contacted Indians (CGIIRC) is trying to carry the protection policy forward under the prerogative of no contact; current government policies, however, do not signal the same posture. Seen in this light, there is a paradox between the purpose of "protection" for which Funai was created and its relationship with the Executive Branch, to which it is subordinate, when the latter puts into practice policies that impact isolated and recently contacted groups.

It is necessary and urgent that Funai, in cooperation with civil society, reclaim its constitutional duty to protect and promote indigenous rights, including those of isolated and recently contacted groups, in such a way that its technical staff and managers do not stray from that duty. What we see today is a large volume of "administrative tasks" being performed by sertanistas, FPE coordinators, and their assistants, making it impossible for them to carry out the in situ protection work that is their responsibility.

As noted at the outset: indigenist policies are subordinated to plans for national defense, the construction of roads and hydroelectric dams, the expansion of agribusiness, and mineral extraction. On paper, the noblest intentions hold sway, but in fact the disputes around the indigenous question have had, since colonial times, territorial ordering and natural resources as their backdrop. And Funai is left to mitigate the effects of a policy of which it is a hostage. This context is reproduced in most of the South American countries where isolated and recently contacted Indians are present. In some of them, however, such as Colombia, protection policies and methodologies have made considerable advances. And Funai, hostage to developmentalist policies, is, like the Brazilian national football team, losing its place as a protagonist in the field of protection for isolated and recently contacted Indians.

Antenor Vaz

*Antenor Vaz is a physicist, educator, and sertanista. A specialist in physics teaching laboratories, he has worked in popular education, in methodologies for working with young people, and in the management of social projects. His greatest experience in the social field has been with indigenous education and with coordinating efforts to locate isolated indigenous groups in the Brazilian Amazon. He put the Policy for Isolated Indians into practice in the Amazon region, which made possible the creation of the first Indigenous Territory (T.I. Massaco) reserved exclusively for uncontacted Indians to be recognized by the Brazilian government. He is a member of the International Advisory Committee on Isolated Indians and Indians in Initial Contact. He was coordinator of policies for recently contacted Indians at Funai's General Coordination Office for Isolated and Recently Contacted Indians (CGIIRC) until March 2013.

Illustration: Daniel Kondo

1  Manoela Carneiro da Cunha, "Política indigenista no século XIX," in História dos índios no Brasil, Companhia da Terra/Secretaria Municipal de Cultura/Fapesp, São Paulo, 1992.

2  This report became known nationwide as the "Figueiredo Report" and was missing for more than forty years. It was recently located in the archives of the Museu do Índio in Rio de Janeiro. The report denounces not only the cases of corruption within the SPI but also the entire process of repression and barbarity carried out by the state against the indigenous peoples.

3  Over the years the CII changed its name and its objectives. The most recent change was published in 2012, through Decree No. 7,778 of July 27, 2012, under which it became the General Coordination Office for Isolated and Recently Contacted Indians (CGIIRC), subordinate to the Directorate of Territorial Protection, with work with recently contacted Indians included in its new configuration.

4  Carlos Augusto da Rocha Freire, Sagas sertanistas: práticas e representações do campo indigenista no século XX, doctoral dissertation, Graduate Program in Social Anthropology, Museu Nacional, Universidade Federal do Rio de Janeiro (UFRJ), 2005.

August 4, 2014

Alternate mechanism of species formation picks up support, thanks to a South American ant (University of Rochester )

21-Aug-2014

 

By Peter Iglinski

A queen ant of the host species Mycocepurus goeldii.

A newly-discovered species of ant supports a controversial theory of species formation. The ant, only found in a single patch of eucalyptus trees on the São Paulo State University campus in Brazil, branched off from its original species while living in the same colony, something thought rare in current models of evolutionary development.

“Most new species come about in geographic isolation,” said Christian Rabeling, assistant professor of biology at the University of Rochester. “We now have evidence that speciation can take place within a single colony.”

The findings by Rabeling and the research team were published today in the journal Current Biology.

In discovering the parasitic Mycocepurus castrator, Rabeling and his colleagues uncovered an example of a still-controversial theory known as sympatric speciation, which occurs when a new species develops in the same geographic area as its parent species yet reproduces independently of it. “While sympatric speciation is more difficult to prove,” said Rabeling, “we believe we are in the process of actually documenting a particular kind of evolution-in-progress.”

A new species forms when its members are no longer able to reproduce with members of the parent species. The commonly accepted mechanism is called allopatric speciation, in which geographic barriers—such as mountains—separate members of a group, causing them to evolve independently.

“Since Darwin’s Origin of Species, evolutionary biologists have long debated whether two species can evolve from a common ancestor without being geographically isolated from each other,” said Ted Schultz, curator of ants at the Smithsonian’s National Museum of Natural History and co-author of the study. “With this study, we offer a compelling case for sympatric evolution that will open new conversations in the debate about speciation in these ants, social insects and evolutionary biology more generally.”

A queen ant of the parasitic species Mycocepurus castrator.

M. castrator is not simply another ant in the colony; it’s a parasite that lives with—and off of—its host, Mycocepurus goeldii. The host is a fungus-growing ant that cultivates fungus for its nutritional value, both for itself and, indirectly, for its parasite, which does not participate in the work of growing the fungus garden. That led the researchers to study the genetic relationships of all fungus-growing ants in South America, including all five known and six newly discovered species of the genus Mycocepurus, to determine whether the parasite did evolve from its presumed host. They found that the parasitic ants were, indeed, genetically very close to M. goeldii, but not to the other ant species.

They also determined that the parasitic ants were no longer reproductively compatible with the host ants—making them a unique species—and had stopped reproducing with their host a mere 37,000 years ago—a very short period on the evolutionary scale.

A big clue for the research team was found by comparing the ants’ genes, both in the cell’s nucleus as well as in the mitochondria—the energy-producing structures in the cells. Genes are made of units called nucleotides, and Rabeling found that the sequencing of those nucleotides in the mitochondria is beginning to look different from what is found in the host ants, but that the genes in the nucleus still have traces of the relationship between host and parasite, leading him to conclude that M. castrator has begun to evolve away from its host.
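
The comparison described here ultimately comes down to counting how many aligned nucleotide positions differ between two sequences. A toy illustration of that kind of pairwise comparison, using short made-up sequences rather than real ant data:

```python
# Toy illustration with invented 20-base sequences (not real ant data):
# mitochondrial DNA already showing substitutions, nuclear DNA still identical.
def divergence(seq_a: str, seq_b: str) -> float:
    """Fraction of aligned positions at which two equal-length sequences differ."""
    assert len(seq_a) == len(seq_b)
    return sum(a != b for a, b in zip(seq_a, seq_b)) / len(seq_a)

host_mt      = "ACGTACGTACGTACGTACGT"
parasite_mt  = "ACGTACGAACGTACTTACGT"   # a few substitutions: lineages drifting apart
parasite_nuc = "ACGTACGTACGTACGTACGT"   # identical: traces of shared ancestry remain

print(divergence(host_mt, parasite_mt))    # 0.1
print(divergence(host_mt, parasite_nuc))   # 0.0
```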

Rabeling explained that just comparing some nuclear and mitochondrial genes may not be enough to demonstrate that the parasitic ants are a completely new species. “We are now sequencing the entire mitochondrial and nuclear genomes of these parasitic ants and their host in an effort to confirm speciation and the underlying genetic mechanism.”

The parasitic ants need to exercise discretion because taking advantage of the host species is considered taboo in ant society: offending ants have been known to be killed by worker mobs. As a result, the parasitic queens of the new species have evolved a smaller size, making them difficult to distinguish from host workers.

Host queens and males reproduce in an aerial mating ceremony that, in the wet tropics, takes place only during a particular season, when the rains begin. Rabeling found that the parasitic queens and males, needing to be more discreet about their reproductive activities, diverge from the host’s mating pattern. Because they must hide their parasitic identity, M. castrator males and females have lost the special adaptations that allowed them to reproduce in flight and instead mate inside the host nest, making it impossible for them to interbreed with their host species.

The research team included Ted Schultz of the Smithsonian Institution’s National Museum of Natural History, Naomi Pierce of Harvard University, and Maurício Bacci, Jr. of the Center for the Study of Social Insects (São Paulo State University, Rio Claro, Brazil).

The Climate Swerve (The New York Times)

Credit: Robert Frank Hunter

 

AMERICANS appear to be undergoing a significant psychological shift in our relation to global warming. I call this shift a climate “swerve,” borrowing the term used recently by the Harvard humanities professor Stephen Greenblatt to describe a major historical change in consciousness that is neither predictable nor orderly.

The first thing to say about this swerve is that we are far from clear about just what it is and how it might work. But we can make some beginning observations which suggest, in Bob Dylan’s words, that “something is happening here, but you don’t know what it is.” Experience, economics and ethics are coalescing in new and important ways. Each can be examined as a continuation of my work comparing nuclear and climate threats.

The experiential part has to do with a drumbeat of climate-related disasters around the world, all actively reported by the news media: hurricanes and tornadoes, droughts and wildfires, extreme heat waves and equally extreme cold, rising sea levels and floods. Even when people have doubts about the causal relationship of global warming to these episodes, they cannot help being psychologically affected. Of great importance is the growing recognition that the danger encompasses the entire earth and its inhabitants. We are all vulnerable.

This sense of the climate threat is represented in public opinion polls and attitude studies. A recent Yale survey, for instance, concluded that “Americans’ certainty that the earth is warming has increased over the past three years,” and “those who think global warming is not happening have become substantially less sure of their position.”

Falsification and denial, while still all too extensive, have come to require more defensive psychic energy and political chicanery.

But polls don’t fully capture the complex collective process occurring.

The most important experiential change has to do with global warming and time. Responding to the climate threat — in contrast to the nuclear threat, whose immediate and grotesque destructiveness was recorded in Hiroshima and Nagasaki — has been inhibited by the difficulty of imagining catastrophic future events. But climate-related disasters and intense media images are hitting us now, and providing partial models for a devastating climate future.

At the same time, economic concerns about fossil fuels have raised the issue of value. There is a wonderfully evocative term, “stranded assets,” to characterize the oil, coal and gas reserves that are still in the ground. Trillions of dollars in assets could remain “stranded” there. If we are serious about reducing greenhouse gas emissions and sustaining the human habitat, between 60 percent and 80 percent of those assets must remain in the ground, according to the Carbon Tracker Initiative, an organization that analyzes carbon investment risk. In contrast, renewable energy sources, which only recently have achieved the status of big business, are taking on increasing value, in terms of returns for investors, long-term energy savings and relative harmlessness to surrounding communities.

Pragmatic institutions like insurance companies and the American military have been confronting the consequences of climate change for some time. But now, a number of leading financial authorities are raising questions about the viability of the holdings of giant carbon-based fuel corporations. In a world fueled by oil and coal, it is a truly stunning event when investors are warned that the market may end up devaluing those assets. We are beginning to see a bandwagon effect in which the overall viability of fossil-fuel economics is being questioned.

Can we continue to value, and thereby make use of, the very materials most deeply implicated in what could be the demise of the human habitat? It is a bit like the old Jack Benny joke, in which an armed robber offers a choice, “Your money or your life!” And Benny responds, “I’m thinking it over.” We are beginning to “think over” such choices on a larger scale.

This takes us to the swerve-related significance of ethics. Our reflections on stranded assets reveal our deepest contradictions. Oil and coal company executives focus on the maximum use of their product in order to serve the interests of shareholders, rather than the humane, universal ethics we require to protect the earth. We may well speak of those shareholder-dominated principles as “stranded ethics,” which are better left buried but at present are all too active above ground.

Such ethical contradictions are by no means entirely new in historical experience. Consider the scientists, engineers and strategists in the United States and the Soviet Union who understood their duty as creating, and possibly using, nuclear weapons that could destroy much of the earth. Their conscience could be bound up with a frequently amorphous ethic of “national security.” Over the course of my work I have come to the realization that it is very difficult to endanger or kill large numbers of people except with a claim to virtue.

The climate swerve is mostly a matter of deepening awareness. When exploring the nuclear threat I distinguished between fragmentary awareness, consisting of images that come and go but remain tangential, and formed awareness, which is more structured, part of a narrative that can be the basis for individual and collective action.

In the 1980s there was a profound worldwide shift from fragmentary awareness to formed awareness in response to the potential for a nuclear holocaust. Millions of people were affected by that “nuclear swerve.” And even if it is diminished today, the nuclear swerve could well have helped prevent the use of nuclear weapons.

With both the nuclear and climate threats, the swerve in awareness has had a crucial ethical component. People came to feel that it was deeply wrong, perhaps evil, to engage in nuclear war, and are coming to an awareness that it is deeply wrong, perhaps evil, to destroy our habitat and create a legacy of suffering for our children and grandchildren.

Social movements in general are energized by this kind of ethical passion, which enables people to experience the more active knowledge associated with formed awareness. That was the case in the movement against nuclear weapons. Emotions related to individual conscience were pooled into a shared narrative by enormous numbers of people.

In earlier movements there needed to be an overall theme, even a phrase, that could rally people of highly divergent political and intellectual backgrounds. The idea of a “nuclear freeze” mobilized millions of people with the simple and clear demand that the United States and the Soviet Union freeze the testing, production and deployment of nuclear weapons.

Could the climate swerve come to include a “climate freeze,” defined by a transnational demand for cutting back on carbon emissions in steps that could be systematically outlined?

With or without such a rallying phrase, the climate swerve provides no guarantees of more reasonable collective behavior. But with human energies that are experiential, economic and ethical it could at least provide — and may already be providing — the psychological substrate for action on behalf of our vulnerable habitat and the human future.

Evangelical Declaration on Global Warming (Cornwall Alliance)


May 1, 2009

PREAMBLE

As governments consider policies to fight alleged man-made global warming, evangelical leaders have a responsibility to be well informed, and then to speak out. A Renewed Call to Truth, Prudence, and Protection of the Poor: An Evangelical Examination of the Theology, Science, and Economics of Global Warming demonstrates that many of these proposed policies would destroy jobs and impose trillions of dollars in costs to achieve no net benefits. They could be implemented only by enormous and dangerous expansion of government control over private life. Worst of all, by raising energy prices and hindering economic development, they would slow or stop the rise of the world’s poor out of poverty and so condemn millions to premature death.

WHAT WE BELIEVE

  1. We believe Earth and its ecosystems—created by God’s intelligent design and infinite power and sustained by His faithful providence—are robust, resilient, self-regulating, and self-correcting, admirably suited for human flourishing, and displaying His glory. Earth’s climate system is no exception. Recent global warming is one of many natural cycles of warming and cooling in geologic history.
  2. We believe abundant, affordable energy is indispensable to human flourishing, particularly to societies which are rising out of abject poverty and the high rates of disease and premature death that accompany it. With present technologies, fossil and nuclear fuels are indispensable if energy is to be abundant and affordable.
  3. We believe mandatory reductions in carbon dioxide and other greenhouse gas emissions, achievable mainly by greatly reduced use of fossil fuels, will greatly increase the price of energy and harm economies.
  4. We believe such policies will harm the poor more than others because the poor spend a higher percentage of their income on energy and desperately need economic growth to rise out of poverty and overcome its miseries.

WHAT WE DENY

  1. We deny that Earth and its ecosystems are the fragile and unstable products of chance, and particularly that Earth’s climate system is vulnerable to dangerous alteration because of minuscule changes in atmospheric chemistry. Recent warming was neither abnormally large nor abnormally rapid. There is no convincing scientific evidence that human contribution to greenhouse gases is causing dangerous global warming.
  2. We deny that alternative, renewable fuels can, with present or near-term technology, replace fossil and nuclear fuels, either wholly or in significant part, to provide the abundant, affordable energy necessary to sustain prosperous economies or overcome poverty.
  3. We deny that carbon dioxide—essential to all plant growth—is a pollutant. Reducing greenhouse gases cannot achieve significant reductions in future global temperatures, and the costs of the policies would far exceed the benefits.
  4. We deny that such policies, which amount to a regressive tax, comply with the Biblical requirement of protecting the poor from harm and oppression.

A CALL TO ACTION

In light of these facts,

  1. We call on our fellow Christians to practice creation stewardship out of Biblical conviction, adoration for our Creator, and love for our fellow man—especially the poor.
  2. We call on Christian leaders to understand the truth about climate change and embrace Biblical thinking, sound science, and careful economic analysis in creation stewardship.
  3. We call on political leaders to adopt policies that protect human liberty, make energy more affordable, and free the poor to rise out of poverty, while abandoning fruitless, indeed harmful policies to control global temperature.

– See more at: http://www.cornwallalliance.org/2009/05/01/evangelical-declaration-on-global-warming/

Global Warming Deniers Are Growing More Desperate by the Day (Moyers & Co.)

August 6, 2014

Fox News aired a report by the Heartland Institute purporting to “debunk” a top climate change report while obscuring the background of the organization, which previously denied the science demonstrating the dangers of tobacco and secondhand smoke. (Image: Media Matters)

This post originally appeared at Desmogblog.

The Heartland Institute’s recent International Climate Change Conference in Las Vegas illustrates climate change deniers’ desperate confusion. As Bloomberg News noted, “Heartland’s strategy seemed to be to throw many theories at the wall and see what stuck.” A who’s who of fossil fuel industry supporters and anti-science shills variously argued that global warming is a myth; that it’s happening but natural — a result of the sun or “Pacific Decadal Oscillation”; that it’s happening but we shouldn’t worry about it; or that global cooling is the real problem.

The only common thread, Bloomberg reported, was the preponderance of attacks on and jokes about Al Gore: “It rarely took more than a minute or two before one punctuated the swirl of opaque and occasionally conflicting scientific theories.”

Personal attacks are common among deniers. Their lies are continually debunked, leaving them with no rational challenge to overwhelming scientific evidence that the world is warming and that humans are largely responsible. Comments under my columns about global warming include endless repetition of falsehoods like “there’s been no warming for 18 years,” “it’s the sun,” and references to “communist misanthropes,” “libtard warmers,” and worse…

Far worse. Katharine Hayhoe, director of Texas Tech’s Climate Science Center and an evangelical Christian, had her email inbox flooded with hate mail and threats after conservative pundit Rush Limbaugh denounced her, and right-wing blogger Marc Morano published her email address. “I got an email the other day so obscene I had to file a police report,” Hayhoe said in an interview on the Responding to Climate Change website. “They mentioned my child. It had all kinds of sexual perversions in it — it just makes your skin crawl.”

One email chastised her for taking “a man’s job” and called for her public execution, finishing with, “If you have a child, then women in the future will be even more leery of lying to get ahead, when they see your baby crying next to the basket next to the guillotine.”

Many attacks came from fellow Christians unable to accept that humans can affect “God’s creation.” That’s a belief held even by a few well-known scientists and others held up as climate experts, including Roy Spencer, David Legates and Canadian economist Ross McKitrick. They’ve signed the Cornwall Alliance’s Evangelical Declaration on Global Warming, which says, “We believe Earth and its ecosystems — created by God’s intelligent design and infinite power and sustained by His faithful providence — are robust, resilient, self-regulating, and self-correcting, admirably suited for human flourishing, and displaying His glory. Earth’s climate system is no exception.” This worldview predetermines their approach to the science.

Lest you think nasty, irrational comments are exclusively from fringe elements, remember the gathering place for most deniers, the Heartland Institute, has compared those who accept the evidence for human-caused climate change to terrorists. Similar language was used to describe the US Environmental Protection Agency in a full-page ad in USA Today and Politico from the Environmental Policy Alliance, a front group set up by the PR firm Berman and Company, which has attacked environmentalists, labor-rights advocates, health organizations — even Mothers Against Drunk Driving and the Humane Society — on behalf of funders and clients including Monsanto, Wendy’s and tobacco giant Philip Morris. The terrorism meme was later picked up by Pennsylvania Republican congressman Mike Kelly.

David Suzuki: The War on Climate Scientists

 

Fortunately, most people don’t buy irrational attempts to disavow science. A Forum Research poll found 81 percent of Canadians accept the reality of global warming, and 58 percent agree it’s mostly human-caused. An Ipsos MORI poll found that, although the US has the highest number of climate change deniers among the 20 countries surveyed, 54 percent of Americans believe in human-caused climate change. (Research also shows climate change denial is most prevalent in English-speaking countries, especially in areas “served” by media outlets owned by Rupert Murdoch, who rejects climate science.)

It’s time to shift attention from those who sow doubt and confusion, either out of ignorance or misanthropic greed, to those who want to address a real, serious problem. The BBC has the right idea, instructing its reporters to improve accuracy by giving less air time to people with anti-science views, including climate change deniers.

Solutions exist, but every delay makes them more difficult and costly.

Written with contributions from David Suzuki Foundation Senior Editor Ian Hanington.

The views expressed in this post are the author’s alone, and presented here to offer a variety of perspectives to our readers.

 
David Suzuki, co-Founder of the David Suzuki Foundation, is an award-winning scientist, environmentalist and broadcaster.

Anthropology and Christianity (Oxford University Press’s Blog)

BY TIMOTHY LARSEN

AUGUST 13TH 2014

The relationship between anthropologists and Christian identity and belief is a riddle. I first became interested in it by studying the intellectual reasons for the loss of faith given by figures in the nineteenth and twentieth centuries. There are an obvious set of such intellectual triggers.

They were influenced by David Hume or Tom Paine, for example. Or, surprisingly often, it was modern biblical criticism. The big intellectual guns, of course, were figures such as Darwin, Marx, and Freud (and perhaps we can also squeeze Nietzsche in as a kind of d’Artagnan alongside those Three Musketeers). The so-called acids of modernity eat away at traditional religious claims.

As I accumulated and analyzed actual life stories, however, I hit one such trigger that had not been explored by scholars: the discipline of anthropology. It is not hard to find studies – sometimes daunting heaps of them – on Christianity and evolution or Christianity and Marxism and so on, but it was not clear to me what anthropology had to offer that was so unsettling to Christianity. Nor could I find where to go to read about it. Then there was the self-reporting of anthropologists. I’m a historian so I was coming at the discipline as an outsider. Every anthropologist I talked to, however, confidently told me that anthropology was and always had been from its very beginning a discipline that was dominated by scepticism and the rejection of faith.

Many were quite willing to go so far as to call it anti-Christian in ethos. They reported this whether they themselves were personally religious or hostile to religion – whether they self-identified as Catholic, evangelical, liberal Protestant, Jewish, secular, or atheist. If my random encounters were not profoundly unrepresentative, it seemed to be a consensus opinion. And it was not hard to find printed sources that also offered this assessment emphatically.

But then something strange began to happen. Once I had shown an interest in the relationship between anthropology and Christianity, my informants (to use an anthropological category!) would also casually mention, as a kind of irrelevant, quirky novelty, that a certain leading anthropologist was a Christian.


“Of course, dear old Mary Douglas was a devout Catholic, you know.” Purity and Danger Mary Douglas? One of the most influential anthropological theorists of the second half of the twentieth century – no, I didn’t know. “Curiously, Margaret Mead, to the bemusement of her parents, chose to become an Episcopalian in her teens and was an active churchwoman for the rest of her life, even serving on the Commission on Church and Society of the World Council of Churches.” Coming of Age in Samoa Margaret Mead? One of the most prominent public intellectuals of twentieth-century America? That is curious.

“Strange to say, Victor Turner, who had been an agnostic Marxist, converted to Catholicism as an adult.” Really? The anthropologist who got us all talking about liminality and rites of passage and so on? The theorist behind the work of whole generations and departments of anthropology? Curiouser and curiouser.

“Oh, Catholic converts interest you? Well, of course, the presiding genius of the golden age of Oxford anthropology, E. E. Evans-Pritchard was one, as was Godfrey Lienhardt, and David Pocock, and . . .”

“What is that you say? What about Protestants? Well, Robertson Smith was an ordained minister in the Free Church of Scotland. Another fun fact was that the Primitive Methodist missionary Edwin W. Smith became the president of the Royal Anthropological Institute.” And so it went on.

How is one to square the strong perception that anthropologists are hostile to religion with the reality of all these Christian anthropologists hiding in plain sight? The answer to such a question would no doubt be a complicated one with multiple, entangled factors. One of them, however, clearly relates to changing attitudes over time regarding the intellectual integrity and beliefs of people in traditional cultures.

Early anthropologists who rejected Christian faith such as E. B. Tylor (often called the father of anthropology) and James Frazer (of Golden Bough fame) were convinced that so-called “primitive” people had not yet reached a stage of progress in which they could be rational and logical. These pioneering anthropologists saw Christianity as a vehicle that was perniciously carrying into the modern world the superstitious, irrational ways of thinking of “savages”.

As the twentieth century unfolded, however, anthropologists learned to reject such condescending assumptions about traditional cultures. As they came to respect the people they studied, they often decided that their religious life and beliefs also had their own integrity and merit. This sometimes led them to reevaluate faith more generally—and even more personally. This connection is particularly strong in the life and work of E. E. Evans-Pritchard. He was both an adult convert to Catholicism and a major, highly influential champion of the notion that peoples such as the Azande were not “pre-logical” but rather deeply rational.

Victor and Edith Turner went into the field as committed Marxists and agnostics with a touch of bitterness in their anti-Christian stance (Edith had been raised by judgmental evangelical missionary parents). The Turners’ dawning conviction that Ndembu rituals had an irreducible spiritual reality, however, ultimately led them to receive the Christian faith as spiritually efficacious, true, and as their own spiritual home.

When anthropologists today glory in their discipline’s rejection of faith they often have in mind a very specific form of belief: a highly judgmental, narrowly sectarian version of religious commitment that condemns the indigenous people they study as totally cut off from any positive, authentic spiritual knowledge and experience. Evans-Pritchard and Victor Turner, however, are typical of numerous Christian anthropologists who were convinced that the traditional African cultures they studied possessed a natural revelation of God.

The riddle of anthropologists and the Christian faith is at least partially solved by distinguishing “the wrong kind of faith”—the rejection of which is a standard trope in the discipline—from an ethnographic openness to spirituality, which can surprisingly often find expression in Christian forms for individual theorists.

 

Image credit: Old Christian Cross, by Filipov Ivo. CC-BY-SA-3.0 via Wikimedia Commons.

Police confront ‘demons’ in evangelical group; meet the PMs de Cristo (Folha de S.Paulo)

12/08/2014 06h00

“When it comes to deciding whether or not to shoot, technique often won’t help. The officer will need a greater sense of purpose.”

That statement, by Military Police colonel Alexandre Terra, helps justify, in the video below, the existence of the PMs de Cristo (“Military Police officers of Christ”).

The association, religious in character and currently presided over by the officer, has existed within the São Paulo force for 22 years and contrasts with the often violent routine of its members.

In the report, by Anna Virginia Balloussier, with photography by Rodrigo Machado and editing by Diego Arvate, the colonel extols the search for balance in overcoming “supernatural situations” experienced by police officers who “became possessed by demons.”

Watch the video

The Anthropocene: Too Serious for Post-Modern Games (Immanence Blog)

August 18, 2014 by Adrian J Ivakhiv

The following is a guest post by Clive Hamilton, professor of public ethics at Charles Sturt University in Canberra, Australia. It continues the Immanence series “Debating the Anthropocene.” See here, here, and here for previous articles in the series. (And note that some lengthy comments have been added to the previous post by Jan Zalasiewicz, Kieran Suckling, and others.)


 The Anthropocene: Too Serious for Post-Modern Games

by Clive Hamilton

In his post “Against the Anthropocene”, Kieran Suckling makes two main arguments. The first is that the choice of “Anthropocene” as the name for the new epoch breaks with stratigraphic tradition; he feels uncomfortable with a change in tradition, not least because he suspects the break reflects a hidden political objective. The second is that similar names have been invented for the era of industrialism in the past, names that have gone out of fashion, and the Anthropocene will go the same way.

Many scientists and social scientists have entered the debate over the Anthropocene. Each of them seems to want to impose their own disciplinary framework on it. Thus one respondent to Kieran’s post wrote that it is “difficult to get a handle on the term ‘Anthropocene’ because it means very different things to different people”. This is true, but it is true because most people have not bothered to read the half dozen basic papers on the Anthropocene by those who have defined it, and therefore do not know what they are talking about.

The problem is that those who want to colonise and redefine the Anthropocene completely miss the central point being made by Earth system scientists like Paul Crutzen, Will Steffen and Jan Zalasiewicz. I have elsewhere explained why those who have not made the gestalt shift to Earth system thinking cannot help but get the Anthropocene wrong. The Earth system scientists are saying that something radically new has occurred on planet Earth, something that can be detected from the late 18th-century and which is due predominantly to a serious disruption to the global carbon cycle. This disruption has set the Earth system on a new, unpredictable and dangerous trajectory.

Ecologists who have not made the leap to Earth system thinking have been the worst offenders. But a few social scientists and humanities people have been joining the fray, bringing their constructivist baggage. Kieran, I fear, is one of them.

In response to Jan Zalasiewicz’s comment that Paul Crutzen came up with the term at the right time, Kieran misunderstands him, asking: “Why was the time right? Is there something about western psychology and history that made this time right?” So he treats the development of a body of scientific evidence as if it were merely an emanation of social and psychological conditions. It’s a reading that has all of the epistemological and political faults of the “social construction of science”, an approach that today is deployed most effectively by climate science deniers.

Kieran’s disquisition on the historical use of terms like “the age of man” compounds this mistake. It suggests that he has missed the fundamental point – the fundamental point – about the new epoch: that the functioning of the Earth system has changed, and that it changed at the end of the 18th century; or, if we want to be absolutely certain, in the decades after the Second World War. I sense that Jan Z’s gentle reminder was lost, so let me stress it. He wrote: “The Anthropocene is not about being able to detect human influence in stratigraphy, but reflects a change in the Earth system” (my emphasis). The core of the problem, I think, is that most participants in the debate do not actually understand what is meant by “the Earth system”.

So whatever historical interest it may have (and personally I find it fascinating), the fact that Cuvier, Buffon, de Chardin and several others have deployed terms like “the age of man” has no bearing whatsoever on the current debate, which is about a physical transformation, a rupture, that has actually occurred. Arguing that it’s all been said before – “I can show that your claim to have come up with something decisively new is historically inaccurate” – is a standard rhetorical strategy known as deflation. But it carries the same danger we were warned of as children when our parents read us the story of the boy who cried wolf. Whatever historical precedent, and whatever environmental alarm bell may have been rung in the past, the wolf has arrived.

Deflationary moves that characterise the Anthropocene as merely the latest attempt by anthropocentric westerners to impose an “age of man” frame on the world – that it is a fad that will wane as all the others have – betray an essential failure to grasp what the Earth scientists are telling us is now happening in the Earth system. When the IPCC tells us we are heading for a doubling or, more likely, a trebling of CO2 concentrations it is not a fad. When the world’s scientific academies warn we are heading into a world of 4°C warming, changing the conditions of life on the planet, they are not saying it because it’s fashionable. And if the Anthropocene is another example of western linguistic imperialism, changing the name will not exempt the poor and vulnerable of the South from its devastating effects.

No, I’m sorry, this is serious now. After all the attacks on climate science and the well-funded, systematic campaign to discredit climate scientists, people of good will have an absolute obligation not to play around with the science. The constructivist games of the 80s and 90s are an intellectual luxury we can no longer afford.

 

Let me now comment on Kieran’s argument that the Anthropocene is wrongly named because it deviates from naming tradition. He writes that epochs are never named for the causes of change but for the changed composition of the species present in each epoch, era or period. When we examine the helpful lists he provides linking eras, periods and epochs to their characteristic biota, the word that appears uniformly is “appear”. Eukaryotes appear, reptiles appear, fish appear, mammals appear, and so on.

When he calls for consistency in naming, then, we should name the Anthropocene not after the cause of the new epoch (techno-industrial anthropos) but after the new forms of life that have appeared. The problem is that no new forms of life have yet appeared. It seems very likely they will, but it would be impractical to wait 100,000 years before we knew what to name the latest epoch. By then all of the members of the International Commission on Stratigraphy will be dead (they who already in my imagination are like the wizened judges of the Court of Chancery hearing Jarndyce v Jarndyce in Bleak House).

So we are stuck with an anomaly; why this should cause anxiety, except to those wedded to tradition, I do not know. We are practical people; if we cannot apply the old principle to naming a manifestly new and important geological epoch then we must choose a new principle.

Kieran’s solution to the problem is to name the epoch after the radical homogenization of the planet’s species (along with the extinction of many). He suggests the “Homogenocene”. But here he only smuggles in a new criterion, replacing the appearance of new species with a change in the distribution of existing ones. If we were to accept Kieran’s argument then, as Jan points out, why not name the epoch after the overwhelmingly dominant feature of homogenisation, the spread of humans across the globe. According to Vaclav Smil, humans and their domestic animals now account for a breath-taking 97 per cent of the biomass of all terrestrial vertebrates. On Kieran’s own criterion, we would name the new epoch … the Anthropocene.

Finally, it will help if I tell the story of the naming of the Anthropocene, for an innocent reader of Kieran’s piece may draw the conclusion that there was some kind of secret meeting at which a group of western scientists committed to an anthropocentric worldview conspired to promote their ideology by choosing a name that embodies it. Kieran asks: “What belief system(s) drive the shift … to a name based on the power of one species, a species that happens to be us?”

The answer is more prosaic and goes like this. In 2000 Paul Crutzen was at a scientific meeting in Mexico. As the discussion progressed he became increasingly frustrated at the use of the term “Holocene” which he felt no longer described the state of the Earth system, which he knew had been irreversibly disrupted and damaged by human activity. Unable to contain his irritation he intervened, declaring to the meeting: “It’s not the Holocene, it’s … it’s … it’s … the Anthropocene.”

That was it. He just blurted it out; and it stuck. Paul Crutzen is an atmospheric chemist. Given his training it is no surprise that as his brain struggled for the right word it would come with one that linked the state of the Earth to the activities of humans, anthropos. If there had been a savvy sociologist sitting at the table, she might have said: “Wait a minute Paul. It’s not humans in general who got us into this mess, but western industrial ones. So let’s call it the Capitalocene or the Technocene.”

Who knows, perhaps that intervention would have changed the course of history right then. But it didn’t happen, and we have the term we are now debating. Crutzen and his various co-authors would agree with the savvy sociologist that it has been techno-industrialism with its origins in Europe that brought on the new epoch. They have argued persistently that the Anthropocene began with the growth of industries powered by fossil energy towards the end of the 18th-century and accelerated with the hyper-consumerism of the post-war decades.

The real adversaries here are not Crutzen et al. but those scientists, mostly ecologists who do not ‘get’ Earth system science, who are making all sorts of erroneous and confusing claims about the Anthropocene’s origins lying in the distant past, thousands of years before European industrialisation. If anyone is trying to displace responsibility for the mess we are in then they are the culprits. It is they who want to blend the Anthropocene into the Holocene and thereby make the anthropos of the Anthropocene a neutral, blameless, meaningless cause, so that the radical transformation that we now see is the result merely of humans doing what humans do, which nothing can change. No wonder political conservatives are drawn to the early Anthropocene hypothesis.

How to Talk About Climate Change So People Will Listen (The Atlantic)

SEPTEMBER 2014

Environmentalists warn us that apocalypse awaits. Economists tell us that minimal fixes will get us through. Here’s how we can move beyond the impasse. 

Josh Cochran

Not long ago, my newspaper informed me that glaciers in the western Antarctic, undermined by the warmer seas of a hotter world, were collapsing, and their disappearance “now appears to be unstoppable.” The melting of these great ice sheets would make seas rise by at least four feet—ultimately, possibly 12—more than enough to flood cities from New York to Tokyo to Mumbai. Because I am interested in science, I read the two journal articles that had inspired the story. How much time do we have, I wondered, before catastrophe hits?

One study, in Geophysical Research Letters, provided no guidance; the authors concluded only that the disappearing glaciers would “significantly contribute to sea level rise in decades to centuries to come.” But the other, in Science, offered more-precise estimates: during the next century, the oceans will surge by as much as a quarter of a millimeter a year. By 2100, that is, the calamity in Antarctica will have driven up sea levels by almost an inch. The process would get a bit faster, the researchers emphasized, “within centuries.”
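The Science figure converts to the headline number with simple arithmetic: a quarter of a millimeter a year, carried forward to 2100, comes to roughly two centimeters. A back-of-the-envelope check, assuming (purely for illustration) a constant rate starting from 2014:

```python
# Back-of-the-envelope check of the "almost an inch by 2100" figure,
# assuming a constant 0.25 mm/yr contribution starting in 2014 (an
# illustrative simplification, not the paper's projection method).
RATE_MM_PER_YEAR = 0.25
START_YEAR, END_YEAR = 2014, 2100

rise_mm = RATE_MM_PER_YEAR * (END_YEAR - START_YEAR)  # 21.5 mm
rise_inches = rise_mm / 25.4                          # about 0.85 inches

print(f"{rise_mm:.1f} mm, or roughly {rise_inches:.2f} inches, by {END_YEAR}")
```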

How is one supposed to respond to this kind of news? On the one hand, the transformation of the Antarctic seems like an unfathomable disaster. On the other hand, the disaster will never affect me or anyone I know; nor, very probably, will it trouble my grandchildren. How much consideration do I owe the people it will affect, my 40-times-great-grandchildren, who, many climate researchers believe, will still be confronted by rising temperatures and seas? Americans don’t even save for their own retirement! How can we worry about such distant, hypothetical beings?


Worse, confronting climate change requires swearing off something that has been an extraordinary boon to humankind: cheap energy from fossil fuels. In the 3,600 years between 1800 B.C. and 1800 A.D., the economic historian Gregory Clark has calculated, there was “no sign of any improvement in material conditions” in Europe and Asia. Then came the Industrial Revolution. Driven by the explosive energy of coal, oil, and natural gas, it inaugurated an unprecedented three-century wave of prosperity. Artificial lighting, air-conditioning, and automobiles, all powered by fossil fuels, swaddle us in our giddy modernity. In our ergonomic chairs and acoustical-panel cubicles, we sit cozy as kings atop 300 years of flaming carbon.

In the best of times, this problem—given its apocalyptic stakes, bewildering scale, and vast potential cost—would be difficult to resolve. But we are not in the best of times. We are in a time of legislative paralysis. In an important step, the Obama administration announced in June its decision to cut power-plant emissions 30 percent by 2030. Otherwise, this country has seen strikingly little political action on climate change, despite three decades of increasingly high-pitched chatter by scientists, activists, economists, pundits, and legislators.

The chatter itself, I would argue, has done its share to stall progress. Rhetorical overreach, moral miscalculation, shouting at cross-purposes: this toxic blend is particularly evident when activists, who want to scare Americans into taking action, come up against economists, with their cool calculations of acceptable costs. Eco-advocates insist that only the radical transformation of society—the old order demolished, foundation to roof—can fend off the worst consequences of climate change. Economists argue for adapting to the most-likely consequences; cheerleaders for industrial capitalism, they propose quite different, much milder policies, and are ready to let nature take a bigger hit in the short and long terms alike. Both envelop themselves in the mantle of Science, emitting a fug of charts and graphs. (Actually, every side in the debate, including the minority who deny that humans can affect the climate at all, claims the backing of Science.) Bewildered and battered by the back-and-forth, the citizenry sits, for the most part, on its hands. For all the hot air expended on the subject, we still don’t know how to talk about climate change.

As an issue, climate change was unlucky: when nonspecialists first became aware of it, in the 1990s, environmental attitudes had already become tribal political markers. As the Yale historian Paul Sabin makes clear in The Bet, it wasn’t always this way. The votes for the 1970 Clean Air Act, for example, were 374–1 in the House, 73–0 in the Senate. Sabin’s book takes off from a single event: a bet between the ecologist Paul R. Ehrlich and the economist Julian Simon a decade later. Ehrlich’s The Population Bomb (1968), which decried humankind’s rising numbers, was a foundational text in the environmental movement. Simon’s Ultimate Resource (1981) was its antimatter equivalent: a celebration of population growth, it awakened opposition to the same movement.

Activists led by Bill McKibben, the founder of 350.org, protest the building of the Keystone XL pipeline at the White House, February 2013. (AP)

Ehrlich was moderately liberal in his politics but unrestrained in his rhetoric. The second sentence of The Population Bomb promised that “hundreds of millions of people” would starve to death within two decades, no matter what “crash programs” the world launched to feed them. A year later, Ehrlich gave even odds that “England will not exist in the year 2000.” In 1974, he told Congress that “a billion or more people” could starve in the 1980s “at the latest.” When the predictions didn’t pan out, he attacked his critics as “incompetent” and “ignorant,” “morons” and “idiots.”

Simon, who died in 1998, argued that “human resourcefulness and enterprise” will extricate us from our ecological dilemma. Moderately conservative in his politics, he was exuberantly uninhibited in his scorn for eco-alarmists. Humankind faces no serious environmental problems, he asserted. “All long-run trends point in exactly the opposite direction from the projections of the doomsayers.” (All? Really?) “There is no convincing economic reason why these trends toward a better life should not continue indefinitely.” Relishing his role as a spoiler, he gave speeches while wearing red plastic devil horns. Unsurprisingly, he attracted disagreement, to which he responded with as much bluster as Ehrlich. Critics, motivated by “blatant intellectual dishonesty” and indifference to the poor, were “corrupt,” their ideas “ignorant and wrongheaded.”

In 1980, the two men wagered $1,000 on the prices of five metals 10 years hence. If the prices rose, as Ehrlich predicted, it would imply that these resources were growing scarcer, as Homo sapiens plundered the planet. If the prices fell, this would be a sign that markets and human cleverness had made the metals relatively less scarce: progress was continuing. Prices dropped. Ehrlich paid up, insisting disingenuously that he had been “schnookered.”

Schnookered, no; unlucky, yes. In 2010, three Holy Cross economists simulated the bet for every decade from 1900 to 2007. Ehrlich would have won 61 percent of the time. The results, Sabin says, do not prove that these resources have grown scarcer. Rather, metal prices crashed after the First World War and spent most of a century struggling back to their 1918 levels. Ecological issues were almost irrelevant.

The bet demonstrated little about the environment but much about environmental politics. The American landscape first became a source of widespread anxiety at the beginning of the 20th century. Initially, the fretting came from conservatives, both the rural hunters who established the licensing system that brought back white-tailed deer from near-extinction and the Ivy League patricians who created the national parks. So ineradicable was the conservative taint that decades later, the left still scoffed at ecological issues as right-wing distractions. At the University of Michigan, the radical Students for a Democratic Society protested the first Earth Day, in 1970, as elitist flimflam meant to divert public attention from class struggle and the Vietnam War; the left-wing journalist I. F. Stone called the nationwide marches a “snow job.” By the 1980s, businesses had realized that environmental issues had a price tag. Increasingly, they balked. Reflexively, the anticorporate left pivoted; Earth Day, erstwhile snow job, became an opportunity to denounce capitalist greed.


The result, as the Emory historian Patrick Allitt demonstrates in A Climate of Crisis, was a political back-and-forth that became ever less productive. Time and again, Allitt writes, activists and corporate executives railed against each other. Out of this clash emerged regulatory syntheses: rules for air, water, toxins. Often enough, businesspeople then discovered that following the new rules was less expensive than they had claimed it would be; environmentalists meanwhile found out that the problems were less dire than they had claimed.

 

Throughout the 1980s, for instance, activists charged that acid rain from midwestern power-plant emissions was destroying thousands of East Coast lakes. Utilities insisted that anti-pollution equipment would be hugely expensive and make homeowners’ electric bills balloon. One American Electric Power representative predicted that acid-rain control could lead to the “destruction of the Midwest economy.” A 1990 amendment to the Clean Air Act, backed by both the Republican administration and the Democratic Congress, set up a cap-and-trade mechanism that reduced acid rain at a fraction of the predicted cost; electric bills were barely affected. Today, most scientists have concluded that the effects of acid rain were overstated to begin with—fewer lakes were hurt than had been thought, and acid rain was not the only cause.

Rather than learning from this and other examples that, as Allitt puts it, “America’s environmental problems, though very real, were manageable,” each side stored up bitterness, like batteries taking on charge. The process that had led, however disagreeably, to successful environmental action in the 1970s and ’80s brought on political stasis in the ’90s. Environmental issues became ways for politicians to signal their clan identity to supporters. As symbols, the issues couldn’t be compromised. Standing up for your side telegraphed your commitment to take back America—either from tyrannical liberal elitism or right-wing greed and fecklessness. Nothing got done.

As an issue, climate change is perfect for symbolic battle, because it is as yet mostly invisible. Carbon dioxide, its main cause, is not emitted in billowing black clouds, like other pollutants; nor is it caustic, smelly, or poisonous. A side effect of modernity, it has for now a tiny practical impact on most people’s lives. To be sure, I remember winters as being colder in my childhood, but I also remember my home then as a vast castle and my parents as godlike beings.

In concrete terms, Americans encounter climate change mainly in the form of three graphs, staples of environmental articles. The first shows that atmospheric carbon dioxide has been steadily increasing. Almost nobody disputes this. The second graph shows rising global temperatures. This measurement is trickier: carbon dioxide is spread uniformly in the air, but temperatures are affected by a host of factors (clouds, rain, wind, altitude, the reflectivity of the ground) that differ greatly from place to place. Here the data are more subject to disagreement. A few critics argue that for the past 17 years warming has mostly stopped. Still, most scientists believe that in the past century the Earth’s average temperature has gone up by about 1.5 degrees Fahrenheit.

Rising temperatures per se are not the primary concern. What matters most is their future influence on other things: agricultural productivity, sea levels, storm frequency, infectious disease. As the philosopher Dale Jamieson points out in the unfortunately titled Reason in a Dark Time, most of these effects cannot be determined by traditional scientific experiments—white-coats in laboratories can’t melt a spare Arctic ice cap to see what happens. (Climate change has no lab rats.) Instead, thousands of researchers refine ever bigger and more complex mathematical models. The third graph typically shows the consequences such models predict, ranging from worrisome (mainly) to catastrophic (possibly).

Such charts are meaningful to the climatologists who make them. But for the typical citizen they are a muddle, too abstract—too much like 10th-grade homework—to be convincing, let alone to motivate action. In the history of our species, has any human heart ever been profoundly stirred by a graph? Some other approach, proselytizers have recognized, is needed.

To stoke concern, eco-campaigners like Bill McKibben still resort, Ehrlich-style, to waving a skeleton at the reader. Thus the first sentence of McKibben’s Oil and Honey, a memoir of his climate activism, describes 2011–12, the period covered by his book, as “a time when the planet began to come apart.” Already visible “in almost every corner of the earth,” climate “chaos” is inducing “an endless chain of disasters that will turn civilization into a never-ending emergency response drill.”


The only solution to our ecological woes, McKibben argues, is to live simpler, more local, less resource-intensive existences—something he believes is already occurring. “After a long era of getting big and distant,” he writes, “our economy, and maybe our culture, has started to make a halting turn toward the small and local.” Not only will this shift let us avoid the worst consequences of climate change, it will have the happy side effect of turning a lot of unpleasant multinational corporations to ash. As we “subside into a workable, even beautiful, civilization,” we will lead better lives. No longer hypnotized by the buzz and pop of consumer culture, narcotized couch potatoes will be transformed into robust, active citizens: spiritually engaged, connected to communities, appreciative of Earth’s abundance.

For McKibben, the engagement is full throttle: The Oil half of his memoir is about founding 350.org, a group that seeks to create a mass movement against climate change. (The 350 refers to the theoretical maximum safe level, in parts per million, of atmospheric carbon dioxide, a level we have already surpassed.) The Honey half is about buying 70 acres near his Vermont home to support an off-the-grid beekeeper named Kirk Webster, who is living out McKibben’s organic dream in a handcrafted, solar-powered cabin in the woods. Webster, McKibben believes, is the future. We must, he says, “start producing a nation of careful, small-scale farmers such as Kirk Webster, who can adapt to the crazed new world with care and grace, and who don’t do much more damage in the process.”

Poppycock, the French philosopher Pascal Bruckner in effect replies in The Fanaticism of the Apocalypse. A best-selling, telegenic public intellectual (a species that hardly exists in this country), Bruckner is mainly going after what he calls “ecologism,” of which McKibbenites are exemplars. At base, he says, ecologism seeks not to save nature but to purify humankind through self-flagellating asceticism.

To Bruckner, ecologism is both ethnocentric and counterproductive. Ethnocentric because eco-denunciations of capitalism simply give new, green garb to the long-standing Euro-American fear of losing dominance over the developing world (whose recent growth derives, irksomely, from fossil fuels). Counterproductive because ecologism induces indifference, or even hostility to environmental issues. In the quest to force humanity into a puritanical straitjacket of rural simplicity, ecologism employs what should be neutral, fact-based descriptions of a real-world problem (too much carbon dioxide raises temperatures) as bludgeons to compel people to accept modes of existence they would otherwise reject. Intuiting moral blackmail underlying the apparently objective charts and graphs, Bruckner argues, people react with suspicion, skepticism, and sighing apathy—the opposite of the reaction McKibbenites hope to evoke.

The ranchers and farmers in Tony Horwitz’s Boom, a deft and sometimes sobering e-book, suggest Bruckner may be on to something. Horwitz, possibly best known for his study of Civil War reenactors, Confederates in the Attic, travels along the proposed path of the Keystone XL, a controversial pipeline intended to take oil from Alberta’s tar-sands complex to refineries in Steele City, Nebraska—and the project McKibben has used as the rallying cry for 350.org. McKibben set off on his anti-Keystone crusade after the climatologist-provocateur James Hansen charged in 2011 that building the pipeline would be “game over” for the climate. If Keystone were built, Hansen later wrote, “civilization would be at risk.” Everyone Horwitz meets has heard this scenario. But nobody seems to have much appetite for giving up the perks of industrial civilization, Kirk Webster–style. “You want to go back to the Stone Age and use only wind, sun, and water?” one person asks. A truck driver in the tar-sands project tells Horwitz, “This industry is giving me a future, even if it’s a short one and we’re all about to toast together.” Given the scale of the forces involved, individual action seems futile. “It’s going to burn up anyhow at the end,” explains a Hutterite farmer, matter-of-factly. “The world will end in fire.”

 

Whereas McKibbenites see carbon dioxide as an emblem of a toxic way of life, economists like William Nordhaus of Yale tend to view it as simply a by-product of the good fortune brought by capitalism. Nordhaus, the president of the American Economic Association, has researched climate issues for four decades. His The Climate Casino has an even, unhurried tone; a classic Voice of Authority rumbles from the page. Our carbon-dioxide issues, he says, have a “simple answer,” one “firmly based in economic theory and history”:

The best approach is to use market mechanisms. And the single most important market mechanism that is missing today is a high price on CO2 emissions, or what is called “carbon prices” … The easiest way is simply to tax CO2 emissions: a “carbon tax” … The carbon price [from the tax] will be passed on to the consumer in the form of higher prices.

Nordhaus provides graphs (!) showing how a gradually increasing tax—or, possibly, a market in emissions permits—would slowly and steadily ratchet down global carbon-dioxide output. The problem, as he admits, is that the projected reduction “assumes full participation.” Translated from econo-speak, “full participation” means that the Earth’s rich and populous nations must simultaneously apply the tax. Brazil, China, France, India, Russia, the United States—all must move in concert, globally cooperating.


Alas, nothing like Nordhaus’s planetary carbon tax has ever been enacted. The sole precedent is the Montreal Protocol, the 1987 treaty banning substances that react with atmospheric ozone and reduce its ability to absorb the sun’s harmful ultraviolet radiation. Signed by every United Nations member and successfully updated 10 times, the protocol is a model of international eco-cooperation. But it involves outlawing chemicals in refrigerators and spray cans, not asking nations to revamp the base of their own prosperity. Nordhaus’s declaration that a global carbon tax is a simple answer is like arguing that the simple answer to death is repealing the Second Law of Thermodynamics.

Does climate change, as Nordhaus claims, truly slip into the silk glove of standard economic thought? The dispute is at the center of Jamieson’s Reason in a Dark Time. Parsing logic with the care of a raccoon washing a shiny stone, Jamieson maintains that economists’ discussions of climate change are almost as problematic as those of environmentalists and politicians, though for different reasons.

Remember how I was complaining that all discussions of climate change devolve into homework? Here, sadly, is proof. To critique economists’ claims, Jamieson must drag the reader through the mucky assumptions underlying cost-benefit analysis, a standard economic tool. In the case of climate change, the costs of cutting carbon dioxide are high. What are the benefits? If the level of carbon dioxide in the atmosphere rises only slightly above its current 400 parts per million, most climatologists believe, there is (roughly) a 90 percent chance that global temperatures will eventually rise between 3 and 8 degrees Fahrenheit, with the most likely jump being between 4 and 5 degrees. Nordhaus and most other economists conclude that humankind can slowly constrain this relatively modest rise in carbon without taking extraordinary, society-transforming measures, though neither decreasing the use of fossil fuels nor offsetting their emissions will be cheap or easy. But the same estimates show (again in rough terms) a 5 percent chance that letting carbon dioxide rise much above its current level would set off a domino-style reaction leading to global devastation. (No one pays much attention to the remaining 5 percent chance that the carbon rise would have very little effect on temperature.)

In our daily lives, we typically focus on the most likely result: I decide whether to jaywalk without considering the chance that I will trip in the street and get run over. But sometimes we focus on the extreme: I lock up my gun and hide the bullets in a separate place to minimize the chance that my kids will find and play with them. For climate change, should we focus on adapting to the most probable outcome or averting the most dangerous one? Cost-benefit analyses typically ignore the most-radical outcomes: they assume that society has agreed to accept the small but real risk of catastrophe—something environmentalists, to take one particularly vehement section of society, have by no means done.
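One way to see why that 5 percent tail matters is to compare an expected-damage calculation that includes it with one that quietly drops it. The probabilities below follow the rough 90/5/5 split described above; the dollar figures are invented placeholders, not drawn from any of the books under review:

```python
# Hypothetical illustration of why the catastrophic tail can dominate a
# cost-benefit calculation. Probabilities follow the rough 90/5/5 split
# described above; the damage figures are invented placeholders.
scenarios = [
    # (label, probability, damages in trillions of dollars -- hypothetical)
    ("moderate warming (3-8 degrees F)", 0.90, 20.0),
    ("global devastation", 0.05, 1000.0),
    ("little temperature effect", 0.05, 0.0),
]

expected_all = sum(p * d for _, p, d in scenarios)
expected_no_tail = sum(p * d for label, p, d in scenarios
                       if label != "global devastation")

print(f"expected damages, tail included: {expected_all:.0f} trillion")
print(f"expected damages, tail ignored:  {expected_no_tail:.0f} trillion")
```

With these made-up numbers the low-probability catastrophe accounts for most of the expected damages, which is why treating it as negligible is a value judgment rather than a neutral technicality.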

On top of this, Jamieson argues, there is a second problem in the models economists use to discuss climate change. Because the payoff from carbon-dioxide reduction will occur many decades from now, Nordhausian analysis suggests that we should do the bare minimum today, even if that means saddling our descendants with a warmer world. Doing the minimum is expensive enough already, economists say. Because people tomorrow will be richer than we are, as we are richer than our grandparents were, they will be better able to pay to clean up our emissions. Unfortunately, this is an ethically problematic stance. How can we weigh the interests of someone born in 2050 against those of someone born in 1950? In this kind of trade-off between generations, Jamieson argues, “there is no plausible value” for how much we owe the future.
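The machinery doing the work in that argument is the discount rate: a benefit arriving a century from now shrinks dramatically in present-value terms at even a modest annual rate, and hardly at all at a rate near zero. A small sketch with purely hypothetical numbers (neither the rates nor the benefit figure come from the books under review):

```python
# Hypothetical illustration of discounting: what is a benefit arriving in
# 100 years worth today at different annual discount rates? The benefit
# and the rates are illustrative, not drawn from Nordhaus or Jamieson.
def present_value(future_benefit: float, annual_rate: float, years: int) -> float:
    return future_benefit / (1 + annual_rate) ** years

FUTURE_BENEFIT_TRILLIONS = 100.0
YEARS = 100

for rate in (0.001, 0.015, 0.03, 0.05):  # hypothetical discount rates
    pv = present_value(FUTURE_BENEFIT_TRILLIONS, rate, YEARS)
    print(f"discount rate {rate:.1%}: present value is about {pv:.1f} trillion")
```

The choice of rate is precisely where the ethical question of how much we owe the future gets smuggled into an apparently technical calculation.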

Given their moral problems, he concludes, economic models are much less useful as guides than their proponents believe. For all their ostensible practicality—for all their attempts to skirt the paralysis-inducing specter of the apocalypse—economists, too, don’t have a good way to talk about climate change.

Years ago, a colleague and I spoke with the physicist Richard Feynman, later a national symbol of puckish wit and brash truth-telling. At the frontiers of science, he told us, hosts of unclear, mutually contradictory ideas are always swarming about. Researchers can never agree on how to proceed or even on what is important. In these circumstances, Feynman said, he always tried to figure out what would take him forward no matter which theory eventually turned out to be correct. In this agnostic spirit, let’s assume that rising carbon-dioxide levels will become a problem of some magnitude at some time and that we will want to do something practical about it. Is there something we should do, no matter what technical arcana underlie the cost-benefit analyses, no matter when we guess the bad effects from climate change will kick in, no matter how we value future generations, no matter what we think of global capitalism? Indeed, is there some course of action that makes sense even if we think that climate change isn’t much of a problem at all?

As my high-school math teacher used to say, let’s do the numbers. Roughly three-quarters of the world’s carbon-dioxide emissions come from burning fossil fuels, and roughly three-quarters of that comes from just two sources: coal in its various forms, and oil in its various forms, including gasoline. Different studies produce slightly different estimates, but they all agree that coal is responsible for more carbon dioxide than oil is—about 25 percent more. That number is likely to increase, because coal consumption is growing much faster than oil consumption.

Although coal and oil are both fossil fuels, they are used differently. In this country, for example, the great majority of oil—about three-quarters—is consumed by individuals, as they heat their homes and drive their cars. Almost all U.S. coal (93 percent) is burned not in homes but by electric-power plants; the rest is mainly used by industry, notably for making cement and steel. Cutting oil use, in other words, requires huge numbers of people to change their houses and automobiles—the United States alone has 254 million vehicles on the road. Reducing U.S. coal emissions, by contrast, means regulating 557 big power plants and 227 steel and cement factories. (Surprisingly, many smaller coal plants exist, some at hospitals and schools, but their contributions are negligible.) I’ve been whacking poor old Nordhaus for his ideas about who should pay for climate change, but he does make this point, and precisely: “The most cost-effective way to reduce CO2 emissions is to reduce the use of coal first and most sharply.” Note, too, that this policy comes with a public-health bonus: reining in coal pollution could ultimately avoid as many as 6,600 premature deaths and 150,000 children’s asthma attacks per year in the United States alone.

Different nations have different arrangements, but almost everywhere the basic point holds true: a relatively small number of industrial coal plants—perhaps 7,000 worldwide—put out an amazingly large amount of carbon dioxide, more than 40 percent of the global total. And that figure is rising; last year, coal’s share of energy production hit a 44-year high, because Asian nations are building coal plants at a fantastic rate (and, possibly, because demand for coal-fired electricity will soar as electric cars become popular). No matter what your views about the impact and import of climate change, when you talk about cutting carbon dioxide you are primarily talking about coal. To my mind, at least, retrofitting 7,000 industrial facilities, however mind-boggling, is less mind-boggling than, say, transforming the United States into “a nation of careful, small-scale farmers” or enacting a global carbon tax with “full participation.” It is, at least, imaginable.

The focus of the Obama administration on reducing coal emissions suggests that it has followed this logic. If the pattern of the late 20th century still held, industry would reply with exaggerated estimates of the cost, and compromises would be worked out. But because the environment has become a proxy for a tribal battle, an exercise in power politics will surely ensue. I’ve given McKibben grief for his apocalyptic rhetoric, but he’s exactly correct that without a push from a popular movement—without something like 350.org—meaningful attempts to cut back coal emissions are much less likely to yield results.

Regrettably, 350.org has fixated on the Keystone pipeline, which the Congressional Research Service has calculated would raise this nation’s annual output of greenhouse gases by 0.05 to 0.3 percent. (James Hansen, in arguing that the pipeline would be “game over” for the climate, erroneously assumed that all of the tar-sands oil could be burned rapidly, instead of dribbling out in relatively small portions year by year, over decades.) None of this is to say that exploiting tar sands is a good idea, especially given the apparent violation of native treaties in Canada. But a popular movement focused on symbolic goals will have little ability to win practical battles in Washington.

If politics fail, the only recourse, says David Keith, a Harvard professor of public policy and applied physics, will be a technical fix. And soon—by mid-century. Keith is talking about geo-engineering: fighting climate change with more climate change. A Case for Climate Engineering is a short book arguing that we should study spraying the stratosphere with tiny glittering droplets of sulfuric acid that bounce sunlight back into space, reducing the Earth’s temperature. Physically speaking, the notion is feasible. The 1991 eruption of Mount Pinatubo, in the Philippines, created huge amounts of airborne sulfuric acid—and lowered the Earth’s average temperature that year by about 1 degree Fahrenheit.

Keith is candid about the drawbacks. Not only does geo-engineering involve tinkering with planetary systems we only partially understand, it can’t cancel out, even in theory, greenhouse problems like altered rainfall patterns and increased ocean acidity. The sulfur would soon fall to the Earth, a toxic rain of pollution that could kill thousands of people every year. The carbon dioxide that was already in the air would remain. To continue to slow warming, sulfur would have to be lofted anew every year. Still, Keith points out, without this relatively crude repair, unimpeded climate change could be yet more deadly.

Planet-hacking does have an overarching advantage: it’s cheap. “The cost of geoengineering the entire planet for a decade,” Keith writes, “could be less than the $6 billion the Italian government is spending on dikes and movable barriers to protect a single city, Venice, from climate change–related sea level rise.”

That advantage is also dangerous, he points out. A single country could geo-engineer the whole planet by itself. Or one country’s geo-engineering could set off conflicts with another country—a Chinese program to increase its monsoon might reduce India’s monsoon. “Both are nuclear weapons states,” Keith reminds us. According to Forbes, the world has 1,645 billionaires, several hundred of them in nations threatened by climate change. If their businesses or homes were at risk, any one of them could single-handedly pay for a course of geo-engineering. Is anyone certain none of these people would pull the trigger?

Few experts think that relying on geo-engineering would be a good idea. But no one knows how soon reality will trump ideology, and so we may finally have hit on a useful form of alarmism. One of the virtues of Keith’s succinct, scary book is to convince the reader that unless we find a way to talk about climate change, planes full of sulfuric acid will soon be on the runway.

Our Microbiome May Be Looking Out for Itself (New York Times)

A highly magnified view of Enterococcus faecalis, a bacterium that lives in the human gut. Microbes may affect our cravings, new research suggests. Credit: Centers for Disease Control and Prevention

Your body is home to about 100 trillion bacteria and other microbes, collectively known as your microbiome. Naturalists first became aware of our invisible lodgers in the 1600s, but it wasn’t until the past few years that we’ve become really familiar with them.

This recent research has given the microbiome a cuddly kind of fame. We’ve come to appreciate how beneficial our microbes are — breaking down our food, fighting off infections and nurturing our immune system. It’s a lovely, invisible garden we should be tending for our own well-being.

But in the journal Bioessays, a team of scientists has raised a creepier possibility. Perhaps our menagerie of germs is also influencing our behavior in order to advance its own evolutionary success — giving us cravings for certain foods, for example.

Maybe the microbiome is our puppet master.

“One of the ways we started thinking about this was in a crime-novel perspective,” said Carlo C. Maley, an evolutionary biologist at the University of California, San Francisco, and a co-author of the new paper. “What are the means, motives and opportunity for the microbes to manipulate us? They have all three.”

The idea that a simple organism could control a complex animal may sound like science fiction. In fact, there are many well-documented examples of parasites controlling their hosts.

Some species of fungi, for example, infiltrate the brains of ants and coax them to climb plants and clamp onto the underside of leaves. The fungi then sprout out of the ants and send spores showering onto uninfected ants below.

How parasites control their hosts remains mysterious. But it looks as if they release molecules that directly or indirectly can influence their hosts’ brains.

Our microbiome has the biochemical potential to do the same thing. In our guts, bacteria make some of the same chemicals that our neurons use to communicate with one another, such as dopamine and serotonin. And the microbes can deliver these neurological molecules to the dense web of nerve endings that line the gastrointestinal tract.

A number of recent studies have shown that gut bacteria can use these signals to alter the biochemistry of the brain. Compared with ordinary mice, those raised free of germs behave differently in a number of ways. They are more anxious, for example, and have impaired memory.

Adding certain species of bacteria to a normal mouse’s microbiome can reveal other ways in which they can influence behavior. Some bacteria lower stress levels in the mouse. When scientists sever the nerve relaying signals from the gut to the brain, this stress-reducing effect disappears.

Some experiments suggest that bacteria also can influence the way their hosts eat. Germ-free mice develop more receptors for sweet flavors in their intestines, for example. They also prefer to drink sweeter drinks than normal mice do.

Scientists have also found that bacteria can alter levels of hormones that govern appetite in mice.

Dr. Maley and his colleagues argue that our eating habits create a strong motive for microbes to manipulate us. “From the microbe’s perspective, what we eat is a matter of life and death,” Dr. Maley said.

Different species of microbes thrive on different kinds of food. If they can prompt us to eat more of the food they depend on, they can multiply.

Microbial manipulations might fill in some of the puzzling holes in our understanding of food cravings, Dr. Maley said. Scientists have tried to explain food cravings as the body’s way to build up a supply of nutrients after deprivation, or as addictions, much like those for drugs like tobacco and cocaine.

But both explanations fall short. Take chocolate: Many people crave it fiercely, but it isn’t an essential nutrient. And chocolate doesn’t drive people to increase their dose to get the same high. “You don’t need more chocolate at every sitting to enjoy it,” Dr. Maley said.

Perhaps, he suggests, certain kinds of bacteria that thrive on chocolate are coaxing us to feed them.

John F. Cryan, a neuroscientist at University College Cork in Ireland who was not involved in the new study, suggested that microbes might also manipulate us in ways that benefited both them and us. “It’s probably not a simple parasitic scenario,” he said.

Research by Dr. Cryan and others suggests that a healthy microbiome helps mammals develop socially. Germ-free mice, for example, tend to avoid contact with other mice.

That social bonding is good for the mammals. But it may also be good for the bacteria.

“When mammals are in social groups, they’re more likely to pass on microbes from one to the other,” Dr. Cryan said.

“I think it’s a very interesting and compelling idea,” said Rob Knight, a microbiologist at the University of Colorado, who was also not involved in the new study.

If microbes do in fact manipulate us, Dr. Knight said, we might be able to manipulate them for our own benefit — for example, by eating yogurt laced with bacteria that would make us crave healthy foods.

“It would obviously be of tremendous practical importance,” Dr. Knight said. But he warned that research on the microbiome’s effects on behavior was “still in its early stages.”

The most important thing to do now, Dr. Knight and other scientists said, was to run experiments to see if microbes really are manipulating us.

Mark Lyte, a microbiologist at the Texas Tech University Health Sciences Center who pioneered this line of research in the 1990s, is now conducting some of those experiments. He’s investigating whether particular species of bacteria can change the preferences mice have for certain foods.

“This is not a for-sure thing,” Dr. Lyte said. “It needs scientific, hard-core demonstration.”

Nasa-funded study warns of ‘collapse of civilisation’ in coming decades (Independent)

‘Business as usual’ approach of economic elite will lead society to disaster, scientists warn

ADAM WITHNALL

Sunday 16 March 2014

Modern civilisation is heading for collapse within a matter of decades because of growing economic instability and pressure on the planet’s resources, according to a scientific study funded by Nasa.

Using theoretical models to predict what will happen to the industrialised world over the course of the next century or so, mathematicians found that even with conservative estimates things started to go very badly, very quickly.

Referring to the past collapses of often very sophisticated civilisations – the Roman, Han and Gupta Empires for example – the study noted that the elite of society have often pushed for a “business as usual” approach to warnings of disaster until it is too late.

In the report based on his “Human And Nature Dynamical” (Handy) model, the applied mathematician Safa Motesharri wrote: “the process of rise-and-collapse is actually a recurrent cycle found throughout history”.

His research, carried out with the help of a team of natural and social scientists and with funding from Nasa’s Goddard Space Flight Center, has been accepted for publication in the Ecological Economics journal, the Guardian reported.

Motesharri explored the factors which could lead to the collapse of civilisation, from population growth to climate change, and found that when these converge they can cause society to break down because of the “stretching of resources” and “the economic stratification of society into ‘Elites’ and ‘Masses’”.

Using his Handy model to assess a scenario closely resembling the current state of the world, Motesharri found that civilisation “appears to be on a sustainable path for quite a long time, but even using an optimal depletion rate and starting with a very small number of Elites, the Elites eventually consume too much, resulting in a famine among the Masses that eventually causes the collapse of society”.

The report stressed, however, that the worst-case scenario of collapse is not inevitable, and called for action now from the so-called real-world “Elites” to restore economic balance.

“Collapse can be avoided and population can reach equilibrium if the per capita rate of depletion of nature is reduced to a sustainable level, and if resources are distributed in a reasonably equitable fashion,” the scientists said.

This is not the first time scientists have tried to warn us of potentially impending global disaster. Last year it emerged that Stephen Hawking and a team of Britain’s finest minds are drawing up a “doomsday list” of the catastrophic low-risk (but high-impact) events that could devastate the world.

Is empathy in humans and apes actually different? ‘Yawn contagion’ effect studied (Science Daily)

Date: August 12, 2014

Source: PeerJ

Summary: Whether or not humans are the only empathic beings is still under debate. In a new study, researchers directly compared the ‘yawn contagion’ effect between humans and bonobos — our closest evolutionary cousins. By doing so they were able to directly compare the empathic abilities of ourselves with another species, and found that a close relationship between individuals is more important to their empathic response than the fact that individuals might be from the same species.


Scientists have found that differences in levels of emotional contagion between humans and bonobos are attributable to the quality of relationships shared by individuals. Credit: Elisa Demuru

Whether or not humans are the only empathic beings is still under debate. In a new study, researchers directly compared the ‘yawn contagion’ effect between humans and bonobos (our closest evolutionary cousins). By doing so they were able to directly compare the empathic abilities of ourselves with another species, and found that a close relationship between individuals is more important to their empathic response than the fact that individuals might be from the same species.

The ability to experience others’ emotions is hard to quantify in any species, and, as a result, it is difficult to measure empathy in an objective way. The transmission of a feeling from one individual to another, something known as ‘emotional contagion,’ is the most basic form of empathy. Feelings are disclosed by facial expressions (for example sorrow, pain, happiness or tiredness), and these feelings can travel from an “emitting face” to a “receiving face.” Upon receipt, the mirroring of facial expressions evokes in the receiver an emotion similar to the emotion experienced by the sender.

Yawn contagion is one of the most pervasive and apparently trivial forms of emotional contagion. Who hasn’t been infected at least once by another person’s yawn (especially over dinner)? Humans and bonobos are the only two species in which it has been demonstrated that yawn contagion follows an empathic trend, being more frequent between individuals who share a strong emotional bond, such as friends, kin, and mates. Because of this similarity, researchers sought to directly compare the two species. Over the course of five years, they observed both humans and bonobos during their everyday activities and gathered data on yawn contagion by applying the same ethological approach and operational definitions. The results of their research are published today in the peer-reviewed journal PeerJ.

Two features of yawn contagion were compared: how many times the individuals responded to others’ yawns and how quickly. Intriguingly, when the yawner and the responder were not friends or kin, bonobos responded to others’ yawns just as frequently and promptly as humans did. This means that the assumption that emotional contagion is more prominent in humans than in other species does not necessarily hold.

However, humans did respond more frequently and more promptly than bonobos when friends and kin were involved, probably because strong relationships between humans are built upon complex and sophisticated emotional foundations linked to cognition and memory. In this case, the positive feedback linking emotional affinity and the mirroring process seems to spin faster in humans than in bonobos. In humans, such over-activation may explain the potentiated yawning response and also other kinds of unconscious mimicry response, such as happy, pained, or angry facial expressions.

In conclusion, this study suggests that differences in levels of emotional contagion between humans and bonobos are attributable to the quality of relationships shared by individuals. When the complexity of social bonds, typical of humans, is not in play, Homo sapiens climb down the tree of empathy to go back to the understory which we share with our ape cousins.


Journal Reference:

  1. Elisabetta Palagi, Ivan Norscia, Elisa Demuru. Yawn contagion in humans and bonobos: emotional affinity matters more than species. PeerJ, 2014; 2: e519 DOI: 10.7717/peerj.519