Monthly archive: October 2013

Moral in the Morning, but Dishonest in the Afternoon (Science Daily)

Oct. 30, 2013 — Our ability to exhibit self-control to avoid cheating or lying is significantly reduced over the course of a day, making us more likely to be dishonest in the afternoon than in the morning, according to findings published in Psychological Science, a journal of the Association for Psychological Science.

(Image credit: © Mark Poprocki / Fotolia)

“As ethics researchers, we had been running experiments examining various unethical behaviors, such as lying, stealing, and cheating,” researchers Maryam Kouchaki of Harvard University and Isaac Smith of the University of Utah’s David Eccles School of Business explain. “We noticed that experiments conducted in the morning seemed to systematically result in lower instances of unethical behavior.”

This led the researchers to wonder: Is it easier to resist opportunities to lie, cheat, steal, and engage in other unethical behavior in the morning than in the afternoon?

Knowing that self-control can be depleted by a lack of rest and by making repeated decisions, Kouchaki and Smith wanted to examine whether normal daily activities would be enough to deplete self-control and increase dishonest behavior.

In two experiments, college-age participants were shown various patterns of dots on a computer. For each pattern, they were asked to identify whether more dots were displayed on the left or right side of the screen. Importantly, participants were not given money for getting correct answers, but were instead given money based on which side of the screen they determined had more dots; they were paid 10 times the amount for selecting the right over the left. Participants therefore had a financial incentive to select the right, even if there were unmistakably more dots on the left, which would be a case of clear cheating.

In line with the hypothesis, participants tested between 8:00 am and 12:00 pm were less likely to cheat than those tested between 12:00 pm and 6:00 pm — a phenomenon the researchers call the “morning morality effect.”

They also tested participants’ moral awareness in both the morning and afternoon. When presented with word fragments such as “_ _RAL” and “E_ _ _ C_ _”, morning participants were more likely to form the words “moral” and “ethical,” whereas afternoon participants tended to form the words “coral” and “effects,” lending further support to the morning morality effect.

The researchers found the same pattern of results when they tested a sample of online participants from across the United States. Participants were more likely to send a dishonest message to a virtual partner or to report having solved an unsolvable number-matching problem in the afternoon, compared to the morning.

They also discovered that the extent to which people behave unethically without feeling guilt or distress — known as moral disengagement — made a difference in how strong the morning morality effect was. Those participants with a higher propensity to morally disengage were likely to cheat in both the morning and the afternoon. But people who had a lower propensity to morally disengage — those who might be expected to be more ethical in general — were honest in the morning, but less so in the afternoon.

“Unfortunately, the most honest people, such as those less likely to morally disengage, may be the most susceptible to the negative consequences associated with the morning morality effect,” the researchers write. “Our findings suggest that mere time of day can lead to a systematic failure of good people to act morally.”

Kouchaki, a post-doctoral research fellow at Harvard University’s Edmond J. Safra Center for Ethics, completed her doctoral studies at the University of Utah, where Smith is currently a doctoral student. They note that their results could have implications for organizations or businesses trying to reduce unethical behavior.

“For instance, organizations may need to be more vigilant about combating the unethical behavior of customers or employees in the afternoon than in the morning,” the researchers explain. “Whether you are personally trying to manage your own temptations, or you are a parent, teacher, or leader worried about the unethical behavior of others, our research suggests that it can be important to take something as seemingly mundane as the time of day into account.”

Journal Reference:

  1. M. Kouchaki, I. H. Smith. The Morning Morality Effect: The Influence of Time of Day on Unethical Behavior. Psychological Science, 2013; DOI: 10.1177/0956797613498099

Manifesto of the Sociedade Brasileira de Bioquímica e Biologia Molecular

JC e-mail 4846, October 31, 2013

Text signed by SBBq president Moacir Wajner repudiates the invasion of the Instituto Royal

The Sociedade Brasileira de Bioquímica e Biologia Molecular publicly expresses its repudiation of the acts of terrorism and vandalism that damaged yet another facility dedicated to scientific research, this time the Instituto Royal in São Roque, São Paulo. The Instituto Royal is an institution dedicated to research confirming the absence of adverse effects both of medicines and of substances whose high potential as new medicines has already been established by previous studies.

Actions of this kind harm, beyond the institution directly attacked, all of Brazilian society, which is weakened in its capacity to advance science and to develop its own medicines, and is also prevented from exploring the full potential of its biodiversity to improve human and animal health. The country is likewise compromised in its independence and autonomy to fully verify the suitability and quality of medicines brought from other countries, leaving it at the mercy of international exploiters and weakening the safety of the population's health. Attitudes such as this help push Brazil into a subordinate position before international companies and facilitate the exploitation of the Brazilian people.

The Sociedade Brasileira de Bioquímica e Biologia Molecular also warns Brazilian society not to be deceived by the interests of those who sell the illusion that biomedical research and the development and quality control of medicines can be carried out without tools that require laboratory animals. Despite great advances in establishing methodologies that replace the use of such animals, these technologies are still applicable to only a limited number of situations.

At the current stage of scientific development, there is no knowledge that allows us to predict that studies employing laboratory animals could be entirely eliminated, even in the medium term. Indeed, even the development of these replacement methods requires the use of laboratory animals. Today in Brazil, as in the most developed countries in the world, the use of animals in any kind of scientific experiment or in the quality control of medicines is regulated by specific, strictly enforced legislation, which requires rigorous review and approval by Animal Use Ethics Committees (Comissões de Ética no Uso de Animais) of the protocols to which each individual animal will be subjected, as well as proof that no alternative method exists and that the number of animals required cannot be reduced.

(Moacir Wajner, president of the SBBq)

Of Rats and Dogs (Folha de S.Paulo)

JC e-mail 4845, October 30, 2013

By Hélio Schwartsman

“The heart has its reasons, of which reason knows nothing,” wrote Pascal. The philosopher's thought applies well to the people of São Paulo and their love of animals.

According to Datafolha, 66% of respondents oppose the use of dogs in scientific research. The figure drops to 59% when the test subjects are monkeys, 57% for rabbits, and only 29% for rats.

These results, though unsurprising, contrast with the discourse of activists, for whom inflicting suffering on animals constitutes speciesism, a moral offense that the most radical militants equate with racism and slavery.

In purely philosophical terms, this is a consistent line of reasoning, if we accept the consequentialist premises of thinkers such as Peter Singer, for whom all sentient beings deserve equal consideration. If there is a hierarchy among them, it is given by each species' and individual's capacity to feel pain and pleasure. A human being is worth more than a slug; the problem is that mammals, in general, all stand on more or less the same level.

Under this interpretive key, protecting dogs at the expense of rats would constitute speciesism. It would be the equivalent of, under slavery, advocating the liberation of the Nagô and Jeje peoples but not the Hausa and Ashanti, to cite some of the ethnic groups among which Brazil claimed the most victims.

What the Datafolha poll ultimately shows is that people definitely do not think in philosophical categories.

In rejecting consequentialist logic on the basis of emotion, the people of São Paulo reveal the main difficulty of this ethical framework: it demands an egalitarianism so strong that it becomes inhuman. A consistent consequentialist, after all, would have to assign their own child the same value they give a stranger's child.

No matter what Singer and philosophy may say, in the hearts of São Paulo's residents a dog is worth more than a rat.

http://www1.folha.uol.com.br/fsp/opiniao/136412-de-ratos-e-caes.shtml

Animal Brave New World (Canal Ibase)

29/10/2013

Renzo Taddei

Canal Ibase columnist

Judged by the repercussions it had in the press, the liberation of the 178 beagles from the Instituto Royal was a historic landmark. Not even during the debate over the regulation of stem-cell research did so many credentialed people come forward to defend their professional practices. The topic is on the cover of the country's main weekly magazines. An analysis of the arguments presented in defense of using animals as laboratory subjects is, however, discouraging. And it is discouraging because it exposes how unprepared our scientists are to assess, in any broad way, the ethical and moral implications of what they do.

Consider: in the debate we learned that there are lines of research for which the alternatives to animal use are not adequate. We learned that many diseases that are easily treated today would not be so without animal testing, and that many human lives were thereby saved. (Illustrating how reason can succumb to emotion, even among the most hardened rationalists, a Fiocruz researcher went so far as the folly of asserting that “experimental animals are largely responsible for the survival of the human race on the planet.”) Additionally, the fact that important scientists of the past, such as Albert Sabin, Carlos Chagas, and Louis Pasteur, used animals as laboratory subjects in their research shows that scientists, given their contribution to humanity, cannot be treated as criminals. Worse still, if Brazil bans animal testing, Brazilian science will lose autonomy and competitiveness, because it will depend on the results of research carried out in other countries for its advancement.

Moreover, the animal must be taken into account: there is consensus among scientists that laboratory animals should not suffer. Measures have been taken to this end, such as the creation of the Conselho Nacional de Controle de Experimentação Animal and the requirement that each institution have its own Comissão de Ética no Uso de Animais, with a seat for a representative of legally constituted animal-protection societies. And, finally, the “animals themselves” benefit, given how laboratory experiments supposedly contribute to the advancement of veterinary science.

In general terms, what we have summarized here is the following: animals are things, and should be used as such; or animals are not things, but unfortunately must be used as such. Something greater imposes itself (which I will discuss further below) in a determining way, such that whether or not animals are things becomes a minor detail, one that scientists quickly learn to disregard in their professional training.

Photo: Ruth Elison/Flickr

The idea that animals are things is an old one: Aristotle, in his Politics, written twenty-three centuries ago, asserted that animals are not capable of using language and, for that reason, are not capable of an ethical existence. Therefore, the philosopher concludes, animals were created to serve humans. A similar idea appears in the biblical Genesis: “And God said, Let us make man in our image, after our likeness: and let them have dominion over the fish of the sea, and over the fowl of the air, and over the cattle, and over all the earth, and over every creeping thing that creepeth upon the earth” (Genesis 1:26). Saint Augustine and Saint Thomas Aquinas reaffirm the disconnection between animals and God. (Saint Francis is, in Christian history, clearly an outlier.) The idea has reached our day practically intact. The Catholic Catechism states, in paragraph 2415, that “Animals, like plants and inanimate beings, are by nature destined for the common good of past, present, and future humanity.” Renaissance science, through Descartes and other authors, founded its characteristic humanism on this distinction between humans and animals, exacerbating it: the (supposedly) irrational animal comes to be understood as the antithesis of the (supposedly) rational human. The treatment of animals as things by contemporary science thus has ancient historical roots.

It so happens, however, that this idea runs counter to the everyday existence of most of humanity, in all eras. In non-Western societies and cultures, it is common to attribute some form of “human” consciousness and personality to animals. In Western societies, anyone who has a pet knows that pets have much more than the mere capacity to feel pain: they are capable of making plans; of interacting with one another and with humans in complex tasks, making autonomous decisions; they integrate themselves meaningfully into the emotional ecology of human families, devising intelligent ways to communicate their emotions. (Not to mention how humanized animals are omnipresent in our cultural imagination, from children's cartoons to football club mascots, from folk characters to Hollywood blockbusters.) Indeed, the contrast between this everyday perception and what the theological and theoretical currents mentioned above suggest makes it seem that there is an excess of rationalization in such arguments. And where there is too much rationalization, what is most likely at work is not a description of the world but an attempt to control reality. In other words, this is more a political discourse, one that tries to stabilize unequal power relations, than anything else (nothing new here for the social sciences or the philosophy of science).

It is from scientific activity itself, however, that the most compelling evidence comes that animals are much more than sentient beings. On July 7, 2012, a group of neuroscientists, neuropharmacologists, neurophysiologists, neuroanatomists, and computational neuroscientists, gathered at the University of Cambridge, produced the document entitled the Cambridge Declaration on Consciousness, which states: “the absence of a neocortex does not appear to preclude an organism from experiencing affective states. Convergent evidence indicates that non-human animals have the neuroanatomical, neurochemical, and neurophysiological substrates of conscious states along with the capacity to exhibit intentional behaviors. Consequently, the weight of evidence indicates that humans are not unique in possessing the neurological substrates that generate consciousness. Non-human animals, including all mammals and birds, and many other creatures, including octopuses, also possess these neurological substrates.” The declaration was signed at a dinner attended by Stephen Hawking. Philip Low, one of the neuroscientists who drafted it, said in an interview with the magazine Veja (issue 2278, July 18, 2012): “It is an inconvenient truth: it has always been easy to claim that animals have no consciousness. Now we have a group of respected neuroscientists who study the phenomenon of consciousness, animal behavior, neural networks, and the anatomy and genetics of the brain. It is no longer possible to say we did not know.”

Another body of research with results that are problematic for keeping mammals in laboratories comes from the sciences that study the social life of animals in their wild environments. Animals are social beings; some, as studies in primatology show us, have social lives governed by complex political dynamics, in which individuals not only understand their kinship relations in sophisticated ways but also occupy specific positions in social hierarchies that can have four levels of differentiation. Princeton University studies of baboons have shown that females are capable of inducing a political rupture in the troop, resulting in the formation of a new social group. Many other animals live in complex hierarchical societies, elephants for example. Dogs, cats, rabbits, and rats are, of course, also social animals, even if the complexity of their groups is not comparable to what is seen among baboons and elephants.

Beyond all this, it is amply documented that many primates are capable of inventing technological solutions to their everyday problems, such as creating tools to crack seed shells, and of transmitting what was invented to other members of the group, including the young. Technically, this means they possess culture, that is, symbolic life. Whales change the “style” of their song from one year to the next, without strictly biological causes. According to the philosopher and musician Bernd M. Scherer, there is no way to explain, using only the notions of territory marking and attracting females, why one bird's song is structured by the repetition of a one- or two-second sequence of sounds while other birds sing in much longer sequences. Scherer, through his research (which includes jazz-style musical interaction with birds and whales), is convinced that there is an aesthetic dimension to birdsong. He also asserts that most birds must learn to sing, and are not born with their song completely genetically predetermined.

There is no reason to think that all of this does not also apply to cows, pigs, and chickens. Annie Potts, of the University of Canterbury, describes in the book Animals and Society, by Margo DeMello (2012), her observation of the friendship between two hens, Buffy and Mecki, at the hen sanctuary the researcher maintains. At a certain point Buffy fell ill, and her health deteriorated to the point that she could no longer come out from under a bush. Her friend Mecki remained sitting at her side, despite all the activity of the sanctuary's other hens, pecking her gently around the face and on her back while emitting soft sounds. When Buffy finally died, Mecki withdrew into the henhouse, and for a time refused to eat or to join the other hens in their activities. Hens are susceptible to grief, Potts concludes.

The more the existence of animals is researched, especially that of birds and mammals, the more one concludes that between them and us there are only differences of degree, not of kind. Both have consciousness, intelligence, intentionality, inventiveness, the capacity to improvise, and skill in using symbols for communication; it appears that non-human animals deploy these capacities in less complex ways than humans do, and that is the whole difference. We are living through the discovery of an animal brave new world. Our world holds far more subjectivities than we imagined; perhaps we should stop searching for intelligence on other planets and start looking more carefully around us. The problem is that when we do, what we see is not pleasant. If animals have the capacity to be subjects of their own lives, as the evidence indicates, then preventing them from being so implicates humans in actions that are, at the very least, ethically reprehensible.

Let us return to the arguments in defense of the use of animals in laboratories, cited at the beginning of this text. Most of the reasons listed rest on utilitarian grounds: “this way is more effective; done otherwise, we lose efficiency.” An ethical discussion cannot be founded on utilitarian premises. If it could, it would be acceptable to kill one healthy individual in order to save (through the donation of their organs, for example) five sick ones. What a good portion of scientists fail to see is that this is a problem that cannot be reduced to the technical dimension; it is a political question (in the philosophical sense of the term, that is, concerning the existential problem of living beings coexisting with conflicting interests).

But there is another element silently shaping the logic of scientific production: market competitiveness. In academia, this manifests as exacerbated productivism, in which any methodological change that implies reduced efficiency in the pace of research and publication meets resistance. In private laboratories, beyond the haste imposed by competition, there is pressure to cut research costs. One must advance, at any cost. This perception of the pace of things seems “natural,” but it is not: the arguments speak of endangering the research that will lead to the cure for AIDS or the creation of a dengue vaccine, as if these things already pre-existed somewhere and their time of “discovery” were fixed. That is a fiction: not only scientific, but also political. Things do not pre-exist, and there is nothing “natural” about their pace. Time is part of politics: it is society that must choose the pace at which to proceed, and it is entirely legitimate to slow the pace of technical-scientific advances if their moral implications are unacceptable.

Of all the scientists who have spoken out in recent days, Sidarta Ribeiro, in last Sunday's Estadão, was the only one to openly raise the problem that animals are not things. But, to the reader's dismay and the disappointment of those who admire him, as I do, his conclusions fell into the common rut of bureaucratic simplism: the problem was solved by the creation of the bureaucratic apparatus regulating animal use, already mentioned at the beginning of this text. Now, if animals are beings endowed with intentionality, intelligence, and affection, and if the fullness of their existence depends on complex social life, is merely keeping their organism alive and (supposedly) free of pain enough to ensure that they “do not suffer”? Sidarta rightly points out that far worse things happen in the meat industry, and in many areas of human existence as well. But he errs in creating the impression that one thing stands in opposition to the other (something like “fight for the humanization of dehumanized humans and leave science in peace”). They are all part of the same problem: the denial of the right to be the subject of one's own life. A coherent ethical stance implies no differentiation by species, considering all those who can effectively be subjects of their own lives. The rest is slavery, of human and non-human animals alike.

The protocols of ethics in research with human subjects were developed after the horrors of Nazi medical experimentation on Jews came to light. It seems inevitable to me that, within a few decades, we will come to regard experimentation on animal subjects in laboratories with the same sense of indignation and horror.

Renzo Taddei holds a doctorate in anthropology from Columbia University. He is a professor at the Universidade Federal de São Paulo.

 

Science and the Beagles (Fórum)

The episode involving the invasion of the Instituto Royal to rescue 178 beagle dogs has so far generated more noise and senseless argument than opportunity for reflection on a question of enormous importance

30/10/2013 11:51 am

By Ulisses Capozzoli, in Observatório da Imprensa

The episode involving the invasion of the Instituto Royal to rescue 178 beagle dogs, in São Roque, 66 km from São Paulo, has by all indications so far generated more noise and senseless argument than opportunity for reflection on a question of enormous importance.

The invasion of the company's facility and the removal of the animals used as subjects in scientific experiments, covered in a somewhat sensationalist manner by the media, has so far produced only two antagonistic blocs: one in favor of the operation and one against it.

The question, however, is more complex, and cannot be handled promisingly by clinging, for example, to a certain legal orthodoxy on one side and unlimited freedom of action on the other. Hence the need for a more balanced and promising reflection on the case.

The invasion of the Instituto Royal by the activists makes sense from a, let us say, historical point of view. But the operation itself, regardless of other consequences, carries risks that a certain naivety on the activists' part failed to consider.

Let us take each in turn.

The activists had already prompted the Public Prosecutor's Office (Ministério Público) to weigh in on the situation of the beagle research, but that process, as is well known, is undesirably slow (Reprodução)

Natural right

The brutality toward and lack of love for animals, especially domestic ones and dogs in particular, covered by the media in recent times have moved anyone with a minimum of perception and concern for the elementary rights owed to everything that lives: humans and animals. In both cases, however, prime-time TV news and the pages of newspapers and magazines have demonstrated the crisis of values in which we live and the broad, complex consequences of this situation in terms of violence, brutality, and lovelessness.

Animals mutilated, dragged, tied to cars and motorcycles as punishment, beaten as an outlet for rancor, hatred, and other pathological transgressions have certainly created, in society as a whole, a sense of impunity for the offenders. Not to mention the odious rodeos and grotesque spectacles, such as the so-called “festas de peão boiadeiro,” shoddy copies of what happens in Texas, in the United States, spread across the country like mushrooms sprouting everywhere.

Likewise, the stories of dogs that faithfully await the return of dead owners who will never come back (the cases of a mechanic and of an animal keeper, among others) are moving, and suggest that animals can be more sensitive and “generous” than a good portion of humans.

Certainly this kind of sentiment was present in the activists' decision to invade the headquarters of the Instituto Royal and free the 178 beagles used in scientific experiments precisely because they are docile and gentle to handle.

The institute's lawyers claim that the company had legal authorization to conduct research on the animals and that, therefore, the activists are rabid, reckless, and above all criminals, the latter for more than one reason. The truth, however, is that the Instituto Royal's holding a license for research on the beagles is certainly a necessary condition but not a sufficient one, and it is precisely here that the fundamental core of the whole question may lie.

This means that the institute, surely aware of the activists' discontent (none of this happens overnight without a certain fermentation of tempers), should have handled the matter more scientifically, which is to say with more responsibility and efficiency.

And that did not happen.

The Instituto Royal should have invited a group of representatives of the activists, with the participation of the Public Prosecutor's Office, to see and discuss the animals' situation.

And that did not happen.

The activists had already prompted the Public Prosecutor's Office to weigh in on the situation of the beagle research, but that process, as is well known, is undesirably slow, bureaucratic, and, in a good share of cases, utterly frustrating. So, if the Instituto Royal did not follow due procedure (scientific procedure, one might say) in dealing with the social environment surrounding its animal-research facility, this must be formally acknowledged, and the institute must be held accountable for it.

And, certainly, more important than merely holding the Instituto Royal accountable: the acknowledgment of this situation should be capable of creating a new and promising environment regarding the use of animals as test subjects in the production of medicines for humans.

Can the use of live test animals in this kind of scientific investigation be dispensed with entirely, as some advocate?

The answer here is very far from a simple “yes” or “no.” It is more complex and challenging. And precisely for that reason it must be analyzed in a broader context, always with the concern of increasingly avoiding the use of live test animals.

The reasons for this are of various orders, and one of them is the elementary natural right of animals to live with dignity, just as humans do, even though both, humans and animals, currently share a violent, insensitive world with apparently little prospect of change in the immediate future.

An emblematic story

The Sociedade Brasileira para o Progresso da Ciência (SBPC) released a statement to the media last week condemning the activists' invasion of the Instituto Royal, and the company seized on it to cast itself as a victim of the activists' thuggery.

In truth, the SBPC's statement was not a very intelligent initiative, because it was one-sided, narrow, and, to be blunt, orthodox and formal. To take a proper position in a case like this, the highly respectable SBPC had an obligation to reflect more broadly and place the question in its necessary dimension.

And that did not happen.

As for the activists, in invading the laboratory as they did, they could have been (or may yet prove to have been) exposed to contamination that they probably never even suspected when they decided on the operation. This is a threatening possibility that cannot be disregarded, either with respect to the group that broke in or in terms of public health.

The Minister of Science, Technology and Innovation, Marco Antonio Raupp (a former president of the SBPC), also made apparently ill-tempered remarks to journalists when referring to the episode, condemning one-sidedly the activists' invasion to free the Instituto Royal's 178 beagle test subjects. Minister Raupp, an affable, brilliant man of careful and therefore balanced judgment (I have known him for a long time and worked alongside him both at the Instituto Nacional de Pesquisas Espaciais, Inpe, and at the SBPC itself), sounded authoritarian and exclusionary in his remarks.

And that was a great pity.

The minister said that the moment for debating whether or not to use animals as laboratory subjects was already behind us, and with that he summarily dismissed the activists.

The facts, however, in cases like this, are not definitive, just as in science things may also not be definitive. A scientific theory, for example, can only be accepted if it is refutable, which means that the fate of a scientific theory is literally to live on a tightrope.

One night last week, the anchor of a popular broadcast TV channel took it upon himself to comment on the invasion of the Instituto Royal, apparently emboldened by the SBPC's summary statement. The fact, however, is that the poor man scarcely knew what he was talking about, delivering a superficial, obscure, uninformed discourse with every potential to increase the confusion without planting a single seed capable of yielding a more intelligent, necessary, and better-grounded perspective involving all the protagonists of a story as emblematic as the liberation of the 178 little beagles from the Instituto Royal in São Roque.

***Ulisses Capozzoli, a journalist specializing in science communication, holds a master's degree and a doctorate in sciences from the Universidade de São Paulo and is the editor of Scientific American Brasil

Scientists say there is still no alternative to monkeys (Folha de S.Paulo)

JC e-mail 4846, October 31, 2013

Folha's story draws on an interview with Esper Kallás, of the Faculdade de Medicina da USP

So far there is no alternative to the use of monkeys to check whether new HIV treatments are safe enough to be tested in humans, according to Esper Kallás, of the Faculdade de Medicina da USP.

Soon, an HIV vaccine developed in Brazil will begin to be administered to rhesus monkeys at the Instituto Butantan.

Michel Nussenzweig, of Rockefeller University, who uses rhesus monkeys in his studies, says that animals should not be used in research when alternatives exist.

"I don't think animals should be used to test cosmetics. Only when there is no choice and when the research has a chance of benefiting people."

The theft of 178 beagles from the Instituto Royal, in São Roque, almost two weeks ago, brought the subject of animal research to the fore. The laboratory used the animals in studies of cancer drugs, among others.

"Unfortunately, there would be no other way to do this [HIV] study without the monkeys. I take this very seriously. We cannot abuse the animals. We try to create the most humane conditions possible during the tests."

According to Kallás, no researcher likes sacrificing animals, but costs and benefits must be weighed.

"There are 35 million people with HIV in the world. How many monkeys have been used in research to date? An infinitely smaller number. Nobody likes testing on monkeys. But what are the priorities of Brazilian and global public health?"

The USP immunology professor, who conducts research with human subjects, says that Brazilian regulation of testing on animals and on people is already quite strict.

In his view, the delay in approving clinical trials borders on the excessive. "The rigor here is greater than abroad. We end up suffering for it; it takes me a year and a half to get a clinical trial approved."

Kallás says that anyone doing research in Brazil today is "crushed" between society's debate over the use of test animals and the bureaucracy required to approve trials.

"These movements [against animal research] happened in Europe and the US 20 years ago. There is always someone who thinks that saving a rabbit is more important than saving a person."

http://www1.folha.uol.com.br/fsp/cienciasaude/136538-ainda-nao-ha-opcao-a-macaco-dizem-cientistas.shtml

An interview with Alan Greenspan (FT)

October 25, 2013 10:41 am

By Gillian Tett

Six years on from the start of the credit crisis, the former US Federal Reserve chairman is prepared to admit that he got it wrong – at least in part

Alan Greenspan (Photo: Stefan Ruiz)

A couple of years ago I bumped into Alan Greenspan, the former chairman of the US Federal Reserve, in the lofty surroundings of the Aspen Institute Ideas Festival. As we chatted, the sprightly octogenarian declared that he was becoming interested in social anthropology – and wanted to know what books to read.

“Anthropology?” I retorted, in utter amazement. It appeared to overturn everything I knew (and criticised) about the man. Greenspan, after all, was somebody who had trained as an ultraorthodox, free-market economist and was close to Ayn Rand, the radical libertarian novelist. He was (in)famous for his belief that the best way to run an economy was to rely on rational actors competing in open markets. As Fed chair, he seemed to worship mathematical models and disdain “soft” issues such as human culture.

But Greenspan was serious; he wanted to venture into new intellectual territory, he explained. And that reflected a far bigger personal quest. Between 1987 and 2006, when he led the Fed, Greenspan was highly respected. Such was his apparent mastery over markets – and success in delivering stable growth and low inflation – that Bob Woodward, the Washington pundit, famously described him as a “maestro”. Then the credit crisis erupted in 2007 and his reputation crumbled, with critics blaming him for the bubble. Greenspan denied any culpability. But in late 2008, he admitted to Congress that the crisis had exposed a “flaw” in his world view. He had always assumed that bankers would act in ways that would protect shareholders – in accordance with free-market capitalist theory – but this presumption turned out to be wrong.

In the months that followed, Greenspan started to question and explore many things – including the unfamiliar world of anthropology and psychology. Hence our encounter in Aspen.

Was this just a brief intellectual wobble, I wondered? A bid for sympathy from a man who had gone from hero to zero in investors’ eyes? Or was it possible that a former “maestro” of free markets could change his mind about how the world worked? And if so, what does that imply for the discipline of economics, let alone Greenspan’s successors in the policy making world – such as Janet Yellen, nominated as the new head of the Fed?

Earlier this month I finally got a chance to seek some answers when I stepped into a set of bland, wood-panelled offices in the heart of Washington. Ever since Greenspan left the imposing, marble-pillared Fed, this suite has been his nerve centre. He works out of a room dubbed the “Oval Office” due to its shape. It is surprisingly soulless: piles of paper sit on the windowsill next to a bust of Abraham Lincoln. One flash of colour comes from a lurid tropical beach scene that he has – somewhat surprisingly – installed as a screen saver.

“If you are not going to have numbers on your screen, you might as well have something nice to look at,” he laughs, spreading his large hands expansively in the air. Then, just in case I might think that he is tempted to slack off at the age of 87, he stresses that “I do play tennis twice a week – but my golf game is in the soup. I haven’t had time to get out.” Or, it seems, daydream on a beach. “I get so engaged when I have a problem you cannot solve, that I just cannot break away from what I am doing – I keep thinking and thinking and cannot stop.”

The task that has kept him so busy is his new book, The Map and the Territory, published this month and a successor to an earlier memoir, The Age of Turbulence. To the untrained eye, this title might seem baffling. But to Greenspan, the phrase is highly significant. For what his new manuscript essentially does is explain his intellectual journey since 2007. Most notably it shows why he now thinks that the “map” that he (and many others) once used to analyse finance is incomplete – and what this means for anyone wanting to navigate today’s economic “territory”.

Greenspan in the ‘Oval Office’ of his Washington workplace (Photo: Stefan Ruiz)

This is not quite the mea culpa that some people who are angry about the credit bubble would like to see. Greenspan is a man who built his career by convincing people that he was correct. Born in New York to a family of east European Jewish ancestry, he trained as an economist and, before he was appointed by Ronald Reagan to run the Fed, was an economic consultant on Wall Street (interspersed with a brief spell working for the Nixon administration). This background once made him lauded; today it seems more of a liability, at least in the eyes of the political left. “Before [2007] I was embarrassed by the adulation – they made me a rock star,” he says. “But I knew then that I was being praised for something I didn’t really do. So after, when I got hammered, it kind of balanced out, since I don’t think I deserved the criticism either … I am a human so I feel it but not as much as some.”

Yet in one respect, at least, Greenspan has had a change of heart: he no longer thinks that classic orthodox economics and mathematical models can explain everything. During the first six decades of his career, he thought – or hoped – that Homo economicus was a rational being and that algorithms could forecast behaviour. When he worked on Wall Street he loved creating models and when he subsequently joined the Fed he believed the US central bank was brilliantly good at this. “The Fed model was as advanced as you could possibly get it,” he recalls. “All the new concepts with every theoretical advance was embodied in that model – rational expectations, monetarism, all sorts of sophisticated means of thinking about how the economy worked. The Fed has 250 [economic] PhDs in that division and they are all very smart.”

And yet in September 2008, this pride was shattered when those venerated models suddenly stopped working. “The whole period upset my view of how the world worked – the models failed at a time when we needed them most … and the failure was uniform,” he recalls, shaking his head. “JPMorgan had the American economy accelerating three days before [the collapse of Lehman Brothers] – their model failed. The Fed model failed. The IMF model failed. I am sure the Goldman model also missed it too.

“So that left me asking myself what has happened? Are we living in an unreal world which has a model which is supposed to replicate the economy but gets caught out by one of the most extraordinary events in history?”

Shocked, Greenspan spent the subsequent months trying to answer his own question. He crunched and re-crunched his beloved algorithms, scoured the data and tested his ideas. It was not the first time he had engaged in intellectual soul-searching: in his youth he had once subscribed to logical positivism, until Rand, the libertarian, persuaded him those ideas were wrong. However, this was more radical. Greenspan was losing faith in “the presumption of neoclassical economics that people act in rational self-interest”. “To me it suddenly seemed that the whole idea of taking the maths as the basis of pricing that system failed. The whole structure of risk evaluation – what they call the ‘Harry Markowitz approach’ – failed,” he observes, referring to the influential US economist who is the father of modern portfolio management. “The rating agency failed completely and financial services regulation failed too.”

But if classic models were no longer infallible, were there alternative ways to forecast an economy? Greenspan delved into behavioural economics, anthropology and psychology, and the work of academics such as Daniel Kahneman. But those fields did not offer a magic wand. “Behavioural economics by itself gets you nowhere and the reason is that you cannot create a macro model based on just [that]. To their credit, behavioural economists don’t [even] claim they can,” he points out.

Greenspan in 1974 with friend and inspiration, the writer Ayn Rand (Photo: Getty)

But as the months turned into years, Greenspan slowly developed a new intellectual framework. This essentially has two parts. The first half asserts that economic models still work in terms of predicting behaviour in the “real” economy: his reading of past data leaves him convinced that algorithms can capture trends in tangible items like inventories. “In the non-financial part of the system [rational economic theory] works very well,” he says. But money is another matter: “Finance is wholly different from the rest of the economy.” More specifically, while markets sometimes behave in ways that models might predict, they can also become “irrational”, driven by animal spirits that defy maths.

Greenspan partly blames that on the human propensity to panic. “Fear is a far more dominant force in human behaviour than euphoria – I would never have expected that or given it a moment’s thought before but it shows up in the data in so many ways,” he says. “Once you get that skewing in [statistics from panic] it creates the fat tail.” The other crucial issue is what economists call “leverage” (more commonly dubbed “debt”). When debt in an economy is low, then finance is “neutral” in economic terms and can be explained by models, Greenspan believes. But when debt explodes, this creates fragility – and that panic. “The very nature of finance is that it cannot be profitable unless it is significantly leveraged … and as long as there is debt there can be failure and contagion.”

A cynic might complain that it is a pity Greenspan did not spot that “flaw” when he was running the Fed and leverage was exploding. He admits that he first saw how irrational finance could become as long ago as the 1950s and 1960s when he briefly tried, as a young New York economist, to trade commodity markets. Back then he thought he could predict cotton values “from the outside, looking at supply-demand forces”. But when he actually “bought a seat in the market and did a lot of trading” he discovered that rational logic did not always rule. “There were a couple of guys in that exchange who couldn’t tell a hide from copper sheeting but they made a lot of money. Why? They weren’t trading a commodity but human nature … and there is something about human nature which is not rational.”

Half-a-century later, when Greenspan was running the Fed, he had seemingly come to assume that markets would always “self-correct”, in a logical manner. Thus he did not see any reason to prick bubbles or control excessive exuberance by government fiat. “If bubbles are not leveraged, they can be highly disruptive to the wealth of people who own assets but there are not really any secondary consequences,” he explains, pointing out that the stock market bubble of the late 1980s and tech bubble of the late 1990s both deflated – without real economic damage. “It is only when you have leverage that a collapse in values becomes so contagious.”

Of course, the tragedy of the noughties credit bubble was that this bout of exuberance – unlike 1987 or 2001 – did involve leverage on a massive scale. Greenspan, for his part, denies any direct culpability for this. Though critics have carped that he cut rates in 2001, and thus created easy money, he points out that from 2003 onwards the Fed, and other central banks, were diligently raising interest rates. But even “when we raised [official] rates, long-term rates went down – bond prices were very high”, he argues, blaming this “conundrum” on the fact that countries such as China were experiencing “a huge increase in savings, all of which spilled into the developed world and the global bond market at that time”. But whatever the source of this leverage, one thing is clear: Greenspan, like his critics, now agrees that this tidal wave of debt meant that classic economic theory became impotent to forecast how finance would behave. “We have a system of finance which is far too leveraged – [the models] cannot work in this context.”

So what does that mean for institutions such as the Fed? When I arrived to interview Greenspan, the television screens were filled with the face of Yellen. What advice would he give her? Should she rip up all the Fed’s sophisticated models? Hire psychologists or anthropologists instead?

May 2004: Greenspan is nominated as Fed chairman for an unprecedented fifth term by President George W. Bush (Photo: Chuck Kennedy)

For the first time during our two-hour conversation, Greenspan looks nonplussed. “It never entered my mind – it’s almost too presumptuous of me to say. I haven’t thought about it.” Really? I press him. He shakes his head vigorously. And then he slides into diplomatic niceties. One unspoken, albeit binding, rule of central banking is that the current and former incumbents of the top jobs never criticise each other in public. “Yellen is a great economist, a wonderful person,” he insists.

But tact cannot entirely mask Greenspan’s deep concern that six years after the leverage-fuelled crisis, there is even more debt in the global financial system and even easier money due to quantitative easing. And later he admits that the Fed faces a “brutal” challenge in finding a smooth exit path. “I have preferences for rates which are significantly above where they are,” he observes, admitting that he would “hardly” be tempted to buy long-term bonds at their current rates. “I run my own portfolio and I am not long [ie holding] 30-year bonds.”

But even if Greenspan is wary of criticising quantitative easing, he is more articulate about banking. Most notably, he is increasingly alarmed about the monstrous size of the debt-fuelled western money machine. “There is a very tricky problem we don’t know how to solve or even talk about, which is an inexorable rise in the ratio of finance and financial insurance as a ratio of gross domestic income,” he says. “In the 1940s it was 2 per cent of GDP – now it is up to 8 per cent. But it is a phenomenon not indigenous to the US – it is everywhere.

“You would expect that with the 2008 crisis, the share of finance in the economy would go down – and it did go down for a while. But then it bounced back despite the fact that finance was held in such terrible repute! So you have to ask: why are the non-financial parts of the economy buying these services? Honestly, I don’t know the answer.”

What also worries Greenspan is that this swelling size has gone hand in hand with rising complexity – and opacity. He now admits that even (or especially) when he was Fed chairman, he struggled to track the development of complex instruments during the credit bubble. “I am not a neophyte – I have been trading derivatives and things and I am a fairly good mathematician,” he observes. “But when I was sitting there at the Fed, I would say, ‘Does anyone know what is going on?’ And the answer was, ‘Only in part’. I would ask someone about synthetic derivatives, say, and I would get detailed analysis. But I couldn’t tell what was really happening.”

This last admission will undoubtedly infuriate critics. Back in 2005 and 2006, Greenspan never acknowledged this uncertainty. On the contrary, he kept insisting that financial innovation was beneficial and fought efforts by other regulators to rein in the more creative credit products emerging from Wall Street. Even today he remains wary of government control; he does not want to impose excessive controls on derivatives, for example.

But what has changed is that he now believes banks should be forced to hold much thicker capital cushions. More surprising, he has come to the conclusion that banks need to be smaller. “I am not in favour of breaking up the banks but if we now have such trouble liquidating them I would very reluctantly say we would be better off breaking up the banks.” He also thinks that finance as a whole needs to be cut down in size. “Is it essential that the division of labour [in our economy] requires an ever increasing amount of financial insight? We need to make sure that the services that non-financial services buy are not just ersatz or waste,” he observes with a wry chuckle.

Greenspan in 2004 with his wife Andrea Mitchell (Photo: Getty)

There is a profound irony here. In some senses, Greenspan remains an orthodox pillar of ultraconservative American thought: The Map and the Territory rails against fiscal irresponsibility, the swelling social security budget and the entitlement culture. And yet he, like his leftwing critics, now seems utterly disenchanted with Wall Street and the extremities of free-market finance – never mind that he championed them for so many years.

Perhaps this just reflects an 87-year-old man who is trying to make sense of the extreme swings in his reputation. I prefer to think, though, that it reflects a mind that – to his credit – remains profoundly curious, even after suffering this rollercoaster ride. When I say to him that I greatly admire his spirit of inquiry – even though I disagree with some conclusions – he immediately peppers me with questions. “Tell me what you disagree with – please. I really want to hear,” he insists, with a smile that creases his craggy face. As someone who never had children, his books now appear to be his real babies; the only other subject which inspires as much passion is when I mention his adored second wife, Andrea Mitchell, the television journalist.

But later, after I have left, it occurs to me that the real key to explaining the ironies and contradictions that hang over Greenspan is that he has – once again – unwittingly become a potent symbol of an age. Back in the days of the “Great Moderation” – the period of reduced economic volatility starting in the 1980s – most policy makers shared his sunny confidence in 20th-century progress. There was a widespread assumption that a mixture of free market capitalism, innovation and globalisation had made the world a better place. Indeed, it was this very confidence that laid the seeds of disaster. Today, however, that certainty has crumbled; the modern political and economic ecosystem is marked by a culture of doubt and cynicism. Nobody would dare call Yellen “maestro” today; not when the Fed (and others) are tipping into such uncharted territory. This delivers some benefits: Greenspan himself now admits this pre-2007 confidence was an Achilles heel. “Beware of success in policy,” he observes, laughing. “A stable, moderately growing, non-inflationary environment will create a bubble 100 per cent of the time.”

But a world marked by profound uncertainty is also a deeply disconcerting and humbling place. Today there are no easy answers or straightforward heroes or villains, be that among economists, anthropologists or anyone else. Perhaps the biggest moral of The Map and the Territory is that in a shifting landscape, we all need to keep challenging our assumptions and prejudices. And not just at the age of 87.

gillian.tett@ft.com; ‘The Map and the Territory: Risk, Human Nature, and the Future of Forecasting’ by Alan Greenspan is published by Penguin.

 

Animals, plants, nature: the rights of the environment. An interview with Philippe Descola (Unisinos)

By Marino Niola

Philippe Descola inherited Lévi-Strauss's chair in Paris. And he explains how the discipline is evolving: "There are many forms of life. We have to take that into account." In some countries, the protection of and respect for vital resources have been written into the Constitution. We must learn to cohabit.

Lévi-Strauss's anthropology was a grand theory of the human being. Today's anthropology, by contrast, must go beyond the human. The human being alone is no longer enough for it, because nature and culture are one and the same thing; society and environment, a single household. The neurosciences, ethology, genetics and ecology speak clearly: we bipeds with the gift of speech are not the navel of the world but part of life, whether we like it or not.

Philippe Descola smiles mischievously. He took Lévi-Strauss's place in the most prestigious chair of anthropology on the planet, at the Collège de France. Everything here still speaks of the master who revolutionized the human sciences: books, shelves, exotic objects precisely described in Tristes Trópicos. "Obviously, I am not Claude Lévi-Strauss's heir, only his successor," he explains, good-humoredly.

Here is the interview.

A man who possessed an immense and precious erudition, that of a savant from another era.

And one no longer of our time. His analysis of myths is an acrobatic virtuosity. Works such as The Savage Mind and The Raw and the Cooked are the product of a personal talent very close to an artist's. He could remember a fragment of a Japanese tale read 20 years earlier and connect it to the myths of the native peoples of America, or of Greece, that he was studying at that moment. Or to a chord from Wagner's tetralogy.

Lévi-Strauss made anthropology one of the great bodies of knowledge of the 20th century. He showed that behind the differences between cultures lie hidden analogies that allow the myriad diversities to be traced back to a few general laws common to all human beings.

He treated the differences between cultures as variations on a single musical theme. And his great lesson is that anthropology's task is to go beyond superficial differences, beyond ethnography, to reach what makes us all equally human.

Or even all living beings, human and non-human. In this, Lévi-Strauss anticipated the sense of unity between society and nature that now engages millions of global citizens. It is no accident that you chose to rename your chair "Anthropology and Nature," thereby carrying forward the most current and prophetic Lévi-Strauss.

The fact is that humans are not alone on humanity's stage. And the rest, what is usually called nature or the environment, is not our property, nor a projection of ours, much less a mere resource at the disposal of our development. The other creatures, animals, plants and minerals, are also co-tenants of the world. They are not things or mere forms of life but genuine social agents, with the same rights as human beings, and often with shared characteristics that are not merely biological but even cultural. That is why anthropology today can no longer confine itself to the human being; it must extend its gaze to all the beings with which we interact and coexist.

And besides, our idea of nature is relatively recent.

It began to develop only in the 17th century, at the dawn of modernity, when the world was divided in two. On one side, the universe of conventions and rules, that is, culture. On the other, the world of phenomena and the laws of nature.

On one side, the human person; on the other, the non-persons, that is, everything else. But in this way the living being is cut in two and separated from a part of itself. Was this the conception that legitimized the domination and exploitation of human beings, as well as of nature?

Certainly. On top of that, this opposition between culture and nature, between the human being and other creatures, is not even universal. Many peoples do not share it. Consider the first chapter of Ecuador's new Constitution, which protects precisely the rights of nature, and in which nature, unlike in our tradition, appears as a kind of living person. Just like Pachamama, the earth mother of Mesoamerican religions.

Not by chance, the Bolivian president, Evo Morales, and a Latin American summit have recognized that ecosystems as such have rights. A different way of framing the problems, one that, also in light of tragedies like that of the Horn of Africa, should begin to influence the planetary political agenda, especially where common goods are concerned.

In many countries of the world it is inconceivable that vital resources should be privatized. The very idea that there is a market for subsistence goods is an exceptional case in the history of humanity. Aristotle, in his treatment of chrematistics, the science of wealth, already questioned the legitimacy of buying and selling the goods indispensable for survival. What is interesting is that today more and more people are becoming aware that certain resources are untouchable, because they belong not only to human beings but to all living beings, and even to entire ecosystems.

That is, to the planet in its indivisible totality, in its vital integrity, which also includes us, as beings born of the earth.

In this sense, anthropology has an important task: to present other models of humanity, to show how other civilizations faced and solved problems analogous to ours.

What are the three great urgencies of our time?

Ecology, technology and coexistence with other civilizations. Three questions that can be reduced to one: how to get all the planet's occupants to cohabit without too much damage, sacrifice and conflict. And if we fail, there will be a catastrophe: environmental, demographic and informational.

Why informational?

Because we are bound to be flooded by an avalanche of information that is ever more uncontrollable, incongruous and dangerous.

So we will also be buried under mountains of digital garbage. But does politics seem to you equal to the task?

Unfortunately not. Today I see great pusillanimity in politicians and in the various G7s and G20s. They lack courage and imagination. They are always lagging behind reality, partly because they underestimate the role of culture in the making of social and environmental policy. And often they get no further than a few politically correct platitudes about the need for dialogue between cultures. I truly do not believe in that.

Ordinary people seem to believe in it more and more. The movements stirring the world in this period, which look like separate events, are they not perhaps symptoms of a new common sense?

Yes, more and more people are aware that the development model that has governed the world over the past two centuries is coming apart. I would say these movements are rehearsals of the future, the first steps toward a new global democracy.

Scientists Eye Longer-Term Forecasts of U.S. Heat Waves (Science Daily)

Oct. 27, 2013 — Scientists have fingerprinted a distinctive atmospheric wave pattern high above the Northern Hemisphere that can foreshadow the emergence of summertime heat waves in the United States more than two weeks in advance.

This map of air flow a few miles above ground level in the Northern Hemisphere shows the type of wavenumber-5 pattern associated with US drought. This pattern includes alternating troughs (blue contours) and ridges (red contours), with an “H” symbol (for high pressure) shown at the center of each of the five ridges. High pressure tends to cause sinking air and suppress precipitation, which can allow a heat wave to develop and intensify over land areas. (Credit: Image courtesy Haiyan Teng.)

The new research, led by scientists at the National Center for Atmospheric Research (NCAR), could potentially enable forecasts of the likelihood of U.S. heat waves 15-20 days out, giving society more time to prepare for these often-deadly events.

The research team discerned the pattern by analyzing a 12,000-year simulation of the atmosphere over the Northern Hemisphere. During those times when a distinctive “wavenumber-5” pattern emerged, a major summertime heat wave became more likely to subsequently build over the United States.

“It may be useful to monitor the atmosphere, looking for this pattern, if we find that it precedes heat waves in a predictable way,” says NCAR scientist Haiyan Teng, the lead author. “This gives us a potential source to predict heat waves beyond the typical range of weather forecasts.”

The wavenumber-5 pattern refers to a sequence of alternating high- and low-pressure systems (five of each) that form a ring circling the northern midlatitudes, several miles above the surface. This pattern can lend itself to slow-moving weather features, raising the odds for stagnant conditions often associated with prolonged heat spells.

The study is being published next week in Nature Geoscience. It was funded by the U.S. Department of Energy, NASA, and the National Science Foundation (NSF), which is NCAR’s sponsor. NASA scientists helped guide the project and are involved in broader research in this area.

Predicting a lethal event

Heat waves are among the most deadly weather phenomena on Earth. A 2006 heat wave across much of the United States and Canada was blamed for more than 600 deaths in California alone, and a prolonged heat wave in Europe in 2003 may have killed more than 50,000 people.

To see if heat waves can be triggered by certain large-scale atmospheric circulation patterns, the scientists looked at data from relatively modern records dating back to 1948. They focused on summertime events in the United States in which daily temperatures reached the top 2.5 percent of weather readings for that date across roughly 10 percent or more of the contiguous United States. However, since such extremes are rare by definition, the researchers could identify only 17 events that met such criteria — not enough to tease out a reliable signal amid the noise of other atmospheric behavior.
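The selection criterion in the paragraph above (daily readings in the top 2.5 percent for the date, across roughly 10 percent or more of the country) can be sketched as a simple filter over a temperature array. Everything below is synthetic: the station count, the temperature distribution, the shared regional anomaly, and the use of an all-days percentile in place of a per-date one are assumptions made so the sketch runs, not properties of the actual 1948-onward record.

```python
import numpy as np

rng = np.random.default_rng(1)
n_days, n_stations = 3650, 200  # ten synthetic "summers" of daily data

# Synthetic daily temperatures (degrees C): per-station noise plus a shared
# regional anomaly, so that genuinely hot days affect many stations at once
regional = rng.normal(0.0, 3.0, (n_days, 1))
temps = 28.0 + rng.normal(0.0, 3.0, (n_days, n_stations)) + regional

# "Top 2.5 percent of readings" approximated by a per-station
# 97.5th percentile over all days (the study uses per-date thresholds)
thresholds = np.percentile(temps, 97.5, axis=0)

# A day counts as an event when at least 10% of stations exceed threshold
exceed_frac = (temps > thresholds).mean(axis=1)
event_days = np.flatnonzero(exceed_frac >= 0.10)

print(f"{len(event_days)} event days out of {n_days}")
```

The rarity the researchers describe falls out of the arithmetic: a 97.5th-percentile threshold combined with a 10-percent-of-stations requirement leaves only a small fraction of days qualifying.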

The group then turned to an idealized simulation of the atmosphere spanning 12,000 years. The simulation had been created a couple of years before with a version of the NCAR-based Community Earth System Model, which is funded by NSF and the Department of Energy.

By analyzing more than 5,900 U.S. heat waves simulated in the computer model, they determined that the heat waves tended to be preceded by a wavenumber-5 pattern. This pattern is not caused by particular oceanic conditions or heating of Earth’s surface, but instead arises from naturally varying conditions of the atmosphere. It was associated with an atmospheric phenomenon known as a Rossby wave train that encircles the Northern Hemisphere along the jet stream.

During the 20 days leading up to a heat wave in the model results, the five ridges and five troughs that make up a wavenumber-5 pattern tended to propagate very slowly westward around the globe, moving against the flow of the jet stream itself. Eventually, a high-pressure ridge moved from the North Atlantic into the United States, shutting down rainfall and setting the stage for a heat wave to emerge.

When wavenumber-5 patterns in the model were more amplified, U.S. heat waves became more likely to form 15 days later. In some cases, the probability of a heat wave was more than quadruple what would be expected by chance.
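The "more than quadruple" figure is a statement about conditional probability: the chance of a heat wave given an amplified wavenumber-5 pattern 15 days earlier, compared with the base rate. A minimal sketch with made-up counts (the study's actual counts are not reported here):

```python
# Made-up counts for illustration; not the study's numbers
n_days = 10_000      # model days examined
n_heatwave = 500     # days on which a U.S. heat wave began
n_amplified = 800    # days showing an amplified wavenumber-5 pattern
n_both = 160         # amplified-pattern days followed by a heat wave 15 days later

base_rate = n_heatwave / n_days       # P(heat wave) by chance
conditional = n_both / n_amplified    # P(heat wave | amplified pattern)
lift = conditional / base_rate        # how many times the chance rate

print(lift)  # 4.0
```

A lift of 4.0 with these invented counts corresponds to the quadrupling described in the model results.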

In follow-up work, the research team turned again to actual U.S. heat waves since 1948. They found that some historical heat waves were indeed preceded by the kind of large-scale circulation pattern that indicates a wavenumber-5 event.

Extending forecasts beyond 10 days

The research finding suggests that scientists are making progress on a key meteorological goal: forecasting the likelihood of extreme events more than 10 days in advance. At present, there is very limited skill in such long-term forecasts.

Previous research on extending weather forecasts has focused on conditions in the tropics. For example, scientists have found that El Niño and La Niña, the periodic warming and cooling of surface waters in the central and eastern tropical Pacific Ocean, are correlated with a higher probability of wet or dry conditions in different regions around the globe. In contrast, the wavenumber-5 pattern does not rely on conditions in the tropics. However, the study does not exclude the possibility that tropical rainfall could act to stimulate or strengthen the pattern.

Now that the new study has connected a planetary wave pattern to a particular type of extreme weather event, Teng and her colleagues will continue searching for other circulation patterns that may presage extreme weather events.

“There may be sources of predictability that we are not yet aware of,” she says. “This brings us hope that the likelihood of extreme weather events that are damaging to society can be predicted further in advance.”

The University Corporation for Atmospheric Research manages the National Center for Atmospheric Research under sponsorship by the National Science Foundation. Any opinions, findings and conclusions, or recommendations expressed in this release are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

Journal Reference:

  1. Haiyan Teng, Grant Branstator, Hailan Wang, Gerald A. Meehl, Warren M. Washington. Probability of US heat waves affected by a subseasonal planetary wave pattern. Nature Geoscience, 2013; DOI: 10.1038/ngeo1988

Will U.S. Hurricane Forecasting Models Catch Up to Europe’s? (National Geographic)


A satellite view of Hurricane Sandy on October 28, 2012.

Photograph by Robert Simmon, NASA Earth Observatory and NASA/NOAA GOES Project Science team

Willie Drye

for National Geographic

Published October 27, 2013

If there was a bright spot amid Hurricane Sandy’s massive devastation, including 148 deaths, at least $68 billion in damages, and the destruction of thousands of homes, it was the accuracy of the forecasts predicting where the storm would go.

Six days before Sandy came ashore one year ago this week—while the storm was still building in the Bahamas—forecasters predicted it would make landfall somewhere between New Jersey and New York City on October 29.

They were right.

Sandy, which had by then weakened from a Category 2 hurricane to an unusually potent Category 1, came ashore just south of Atlantic City, a few miles from where forecasters said it would, on the third to last day of October.

“They were really, really excellent forecasts,” said University of Miami meteorologist Brian McNoldy. “We knew a week ahead of time that something awful was going to happen around New York and New Jersey.”

That knowledge gave emergency management officials in the Northeast plenty of time to prepare, issuing evacuation orders for hundreds of thousands of residents in New Jersey and New York.

Even those who ignored the order used the forecasts to make preparations, boarding up buildings, stocking up on food and water, and buying gasoline-powered generators.

But there’s an important qualification about the excellent forecasts that anticipated Sandy’s course: The best came from a European hurricane prediction program.

The six-day-out landfall forecast arrived courtesy of a computer model run by the European Centre for Medium-Range Weather Forecasts (ECMWF), which is based in England.

Most of the other models in use at the National Hurricane Center in Miami, including the U.S. Global Forecast System (GFS), didn’t start forecasting a U.S. landfall until four days before the storm came ashore. At the six-day-out mark, that model and others at the National Hurricane Center had Sandy veering away from the Atlantic Coast, staying far out at sea.

“The European model just outperformed the American model on Sandy,” says Kerry Emanuel, a meteorologist at Massachusetts Institute of Technology.

Now, U.S. weather forecasting programmers are working to close the gap between the U.S. Global Forecast System and the European model.

There’s more at stake than simple pride. “It’s to our advantage to have two excellent models instead of just one,” says McNoldy. “The more skilled models you have running, the more you know about the possibilities for a hurricane’s track.”

And, of course, the more lives you can save.

Data, Data, Data

The computer programs that meteorologists rely on to predict the courses of storms draw on lots of data.

U.S. forecasting computers and their European counterparts rely on radar that provides information on cloud formations and the rotation of a storm, on orbiting satellites that show precisely where a storm is, and on hurricane-hunter aircraft that fly into storms to collect wind speeds, barometric pressure readings, and water temperatures.

Hundreds of buoys deployed along the Atlantic and Gulf coasts, meanwhile, relay information about the heights of waves being produced by the storm.

All this data is fed into computers at the National Centers for Environmental Prediction at Camp Springs, Maryland, which use it to run the forecast models. Those computers, linked to others at the National Hurricane Center, translate the computer models into official forecasts.

The forecasters use data from all computer models—including the ECMWF—to make their forecasts four times daily.

Forecasts produced by various models often diverge, leaving plenty of room for interpretation by human forecasters.

“Usually, it’s kind of a subjective process as far as making a human forecast out of all the different computer runs,” says McNoldy. “The art is in the interpretation of all of the computer models’ outputs.”

There are two big reasons why the European model is usually more accurate than U.S. models. First, the ECMWF model is a more sophisticated program that incorporates more data.

Second, the European computers that run the program are more powerful than their U.S. counterparts and are able to do more calculations more quickly.

“They don’t have any top-secret things,” McNoldy said. “Because of their (computer) hardware, they can implement more sophisticated code.”

A consortium of European nations began developing the ECMWF in 1976, and the model has been fueled by a series of progressively more powerful supercomputers in England. It got a boost when the European Union was formed in 1993 and member states started contributing taxes for more improvements.

The ECMWF and the GFS are the two primary models that most forecasters look at, said Michael Laca, producer of TropMet, a website that focuses on hurricanes and other severe weather events.

Laca said that forecasts and other data from the ECMWF are provided to forecasters in the U.S. and elsewhere who pay for the information.

“The GFS, on the other hand, is freely available to everyone, and is funded—or defunded—solely through (U.S.) government appropriations,” Laca said.

And since funding for U.S. research and development is subject to funding debates in Congress, U.S. forecasters are “in a hard position to keep pace with the ECMWF from a research and hardware perspective,” Laca said.

Hurricane Sandy wasn’t the first or last hurricane for which the ECMWF was the most accurate forecast model. It has consistently outperformed the GFS and four other U.S. and Canadian forecasting models.

Greg Nordstrom, who teaches meteorology at Mississippi State University in Starkville, said the European model provided much more accurate forecasts for Hurricane Isaac in August 2012 and for Tropical Storm Karen earlier this year.

“This doesn’t mean the GFS doesn’t beat the Euro from time to time,” he says.  “But, overall, the Euro is king of the global models.”

McNoldy says the European Union’s generous funding of research and development of their model has put it ahead of the American version. “Basically, it’s a matter of resources,” he says. “If we want to catch up, we will. It’s important that we have the best forecasting in the world.”

European developers who work on forecasting software have also benefited from better cooperation between government and academic researchers, says MIT’s Emanuel.

“If you talk to (the National Oceanic and Atmospheric Administration), they would deny that, but there’s no real spirit of cooperation (in the U.S.),” he says. “It’s a cultural problem that will not get fixed by throwing more money at the problem.”

Catching Up Amid Chaos

American computer models’ accuracy in forecasting hurricane tracks has improved dramatically since the 1970s. The average margin of error for a three-day forecast of a hurricane’s track has dropped from 500 miles in 1972 to 115 miles in 2012.

And NOAA is in the middle of a ten-year program intended to dramatically improve the forecasting of hurricanes’ tracks and their likelihood to intensify, or become stronger before landfall.

One of the project’s centerpieces is the Hurricane Weather Research and Forecasting model, or HWRF. In development since 2007, it’s similar to the ECMWF in that it will incorporate more data into its forecasting, including data from the GFS model.

Predicting the likelihood that a hurricane will intensify is difficult. For a hurricane to gain strength, it needs humid air, seawater heated to at least 80ºF, and no atmospheric winds to disrupt its circulation.

In 2005, Hurricane Wilma encountered those perfect conditions and in just 30 hours strengthened from a tropical storm with peak winds of about 70 miles per hour to the most powerful Atlantic hurricane on record, with winds exceeding 175 miles per hour.

But hurricanes are as delicate as they are powerful. Seemingly small environmental changes, like passing over water that’s slightly cooler than 80ºF or ingesting drier air, can rapidly weaken a storm. And the environment is constantly changing.

“Over the next five years, there may be some big breakthrough to help improve intensification forecasting,” McNoldy said. “But we’re still working against the basic chaos in the atmosphere.”

He thinks it will take at least five to ten years for the U.S. to catch up with the European model.

MIT’s Emanuel says three factors will determine whether more accurate intensification forecasting is in the offing: the development of more powerful computers that can accommodate more data, a better understanding of hurricane intensity, and whether researchers reach a point at which no further improvements to intensification forecasting are possible.

Emanuel calls that point the “prediction horizon” and says it may have already been reached: “Our level of ignorance is still too high to know.”

Predictions and Responses

Assuming we’ve not yet hit that point, better predictions could dramatically improve our ability to weather hurricanes.

The more advance warning, the more time there is for those who do choose to heed evacuation orders. Earlier forecasting would also allow emergency management officials more time to provide transportation for poor, elderly, and disabled people unable to flee on their own.

More accurate forecasts would also reduce evacuation expenses.

Estimates of the cost of evacuating coastal areas before a hurricane vary considerably, but it’s been calculated that it costs $1 million for every mile of coastline evacuated. That includes the cost of lost commerce, wages and salaries by those who leave, and the costs of actual evacuating, like travel and shelter.

Better forecasts could reduce the size of evacuation areas and save money.

They would also allow officials to get a jump on hurricane response. The Federal Emergency Management Agency tries to stockpile relief supplies far enough away from an expected hurricane landfall to avoid damage from the storm, but near enough so that the supplies can quickly be moved to affected areas afterwards.

More reliable landfall forecasts would help FEMA position recovery supplies closer to where they’ll be.

Whatever improvements are made, McNoldy warns that forecasting will never be foolproof. However dependable, he said, “Models will always be imperfect.”

The cultures endangered by climate change (PLOS)

Posted: September 9, 2013

By Greg Downey

The Bull of Winter weakens

In 2003, after decades of working with the Viliui Sakha, indigenous horse and cattle breeders in the Vilyuy River region of northeastern Siberia, anthropologist Susan Crate began to hear the local people complain about climate change:

My own “ethnographic moment” occurred when I heard a Sakha elder recount the age-old story of Jyl Oghuha (the bull of winter). Jyl Oghuha’s legacy explains the 100°C annual temperature range of Sakha’s subarctic habitat. Sakha personify winter, the most challenging season for them, in the form of a white bull with blue spots, huge horns, and frosty breath. In early December this bull of winter arrives from the Arctic Ocean to hold temperatures at their coldest (-60° to -65°C; -76° to -85°F) for December and January. Although I had heard the story many times before, this time it had an unexpected ending… (Crate 2008: 570)

Lena Pillars, photo by Maarten Takens (CC BY SA)


This Sakha elder, born in 1935, talked about how the bull symbolically collapsed each spring, but also its uncertain future:

The bull of winter is a legendary Sakha creature whose presence explains the turning from the frigid winter to the warming spring. The legend tells that the bull of winter, who keeps the cold in winter, loses his first horn at the end of January as the cold begins to let go to warmth; then his second horn melts off at the end of February, and finally, by the end of March, he loses his head, as spring is sure to have arrived. It seems that now with the warming, perhaps the bull of winter will no longer be. (ibid.)

Crate found that the ‘softening’ of winter disrupted the Sakha way of life in a number of far more prosaic ways. The winters were warmer, bringing more rain and upsetting the haying season; familiar animals grew less common and new species migrated north; more snow fell, making hunting more difficult in winter; and when that snow thawed, water inundated their towns, fields, and countryside, rotting their houses, bogging down farming, and generally making life more difficult. Or, as a Sakha elder put it to Crate:

I have seen two ugut jil (big water years) in my lifetime. One was the big flood in 1959 — I remember canoeing down the street to our kin’s house. The other is now. The difference is that in ‘59 the water was only here for a few days and now it does not seem to be going away. (Sakha elder, 2009; in Crate 2011: 184).

(Currently, Eastern Russia is struggling with unprecedented flooding along the Chinese border, and, in July, unusual forest fires struck permafrost areas of the region.) As I write this, the website CO2 Now reports that the average atmospheric CO2 level for July 2013 at the Mauna Loa Observatory was 397.23 parts per million, slightly below the landmark 400+ ppm levels recorded in May. The vast majority of climate scientists now argue, not about whether we will witness anthropogenic atmospheric change, but about how much and how quickly the climate will change. Will we cross potential ‘tipping points’, when feedback dynamics accelerate the pace of warming?

While climate science might be controversial with the public in the US (less so here in Australia, and among scientists), the effects of climate change on human populations are even more poorly understood and harder to predict, by public and scientists alike. Following on from Wendy Foden and colleagues’ piece in the PLOS special collection proposing a method to identify the species at greatest risk (Foden et al. 2013), I want to consider how we might identify which cultures are at greatest risk from climate change.

Will climate change threaten human cultural diversity, and if so, which groups will be pushed to the brink most quickly? Are groups like the Viliui Sakha at the greatest risk, especially as we know that climate change is already affecting the Arctic, where warming may be amplified? And what about island groups, threatened by sea level changes? Who will have to change and adapt most because of a shifting climate? Daniel Lende (2013: 496) has suggested that anthropologists need to put our special expertise to work in public commentary, and in the area of climate change, these human impacts seem to be one place where that expertise might be most useful.

The Sakha Republic

The Sakha Republic where the Viliui Sakha live is half of Russia’s Far Eastern Federal District, a district that covers an area almost as large as India, twice the size of Alaska. Nevertheless, fewer than one million people live there, spread thinly across the rugged landscape. The region contains the coldest spot on the planet, the Verkhoyansk Range, where the average January temperature — average — is around -50°, so cold that it doesn’t matter whether that’s Fahrenheit or Celsius.

The area that is now the Sakha Republic was first brought under the control of Tsarist Russia in the seventeenth century, with a tax taken from the local people in furs. Many early Russian migrants to the region adopted Sakha customs. Both the Tsars and the later Communist governors exiled criminals to the region, which came to be called Yakutia; after the fall of the Soviet Union, the Russian Federation recognised the Sakha Republic. The Sakha, also called Yakuts, are the largest group in the area today; since the fall of the Soviets, many of the ethnic Russian migrants have left.

Verkhoyansk Mountains, Sakha Republic, by Maarten Takens, CC (BY SA).


Sakha speakers first migrated north into Siberia as reindeer hunters, mixing with and eventually assimilating the Evenki, a Tungus-speaking group that lived there nomadically. These nomadic groups were later assimilated or forced further north by more sedentary groups of Sakha who raised horses and practiced more intensive reindeer herding and some agriculture (for more information see Susan Crate’s excellent discussion, ‘The Legacy of the Viliui Reindeer-Herding Complex’ at Cultural Survival). The later migrants forced those practicing the earlier, nomadic reindeer-herding way of life into the most remote and rugged pockets of the region. By the first part of the twentieth century, Crate reports, the traditional reindeer-herding lifestyle was completely replaced in the Viliui watershed, although people elsewhere in Siberia continued to practice nomadic lifestyles, following herds of reindeer.

Today the economy of the Sakha Republic relies heavily on mining: gold, tin, and especially diamonds. Almost a quarter of all diamonds in the world — virtually all of Russia’s production — comes from Sakha. The great Udachnaya pipe, a diamond deposit just outside the Arctic circle, is now the third deepest open pit mine in the world, extending down more than 600 meters.

A new project promises to build a pipeline to take advantage of the massive Chaynda gas field in Sakha, sending the gas eastward to Vladivostok on Russia’s Pacific coast (story in the Siberia Times). The $24 billion Gazprom pipeline, which President Putin’s office says he wants developed ‘within the tightest possible timescale’, would mean that Russia would not have to sell natural gas exclusively through Europe, opening a line for direct delivery into the Pacific.

The Sakha have made the transition to the post-Soviet era remarkably well, with a robust economy and a political system that seems capable of balancing development and environmental safeguards (Crate 2003). But after successfully navigating a political thaw, will the Sakha, and other indigenous peoples of the region, fall victim to a much more literal warming?

The United Nations on indigenous people and climate change

This past month, we celebrated the International Day of the World’s Indigenous People (9 August). From 2005 to 2014, the United Nations called for ‘A Decade for Action and Dignity.’ The focus of this year’s observance is ‘Indigenous peoples building alliances: Honouring treaties, agreements and other constructive arrangements’ (for more information, here’s the UN’s website). According to the UN Development Programme, the day ‘presents an opportunity to honour diverse indigenous cultures and recognize the achievements and valuable contributions of an estimated 370 million indigenous peoples.’

The UN has highlighted the widespread belief that climate change will be especially cruel to indigenous peoples:

Despite having contributed the least to GHG [green house gas], indigenous peoples are the ones most at risk from the consequences of climate change because of their dependence upon and close relationship with the environment and its resources. Although climate change is regionally specific and will be significant for indigenous peoples in many different ways, indigenous peoples in general are expected to be disproportionately affected. Indigenous communities already affected by other stresses (such as, for example, the aftermath of resettlement processes), are considered especially vulnerable. (UN 2009: 95)

The UN’s report, State of the World’s Indigenous People, goes on to cite the following specific ‘changes or even losses in the biodiversity of their environment’ that will directly threaten aspects of indigenous life:

  • the traditional hunting, fishing and herding practices of indigenous peoples, not only in the Arctic, but also in other parts of the world;

  • the livelihood of pastoralists worldwide;

  • the traditional agricultural activities of indigenous peoples living in mountainous regions;

  • the cultural and ritual practices that are not only related to specific species or specific annual cycles, but also to specific places and spiritual sites, etc.;

  • the health of indigenous communities (vector-borne diseases, hunger, etc.);

  • the revenues from tourism. (ibid.: 96)

For example, climate change has been linked to extreme drought in Kenya, where the Maasai, a pastoral people, find their herds shrinking and good pasture harder and harder to find. For the Kamayurá in the Xingu region of Brazil, less rain and warmer water have decimated fish stocks in their river and made cassava cultivation a hit-and-miss affair; children are reduced to eating ants on flatbread to stave off hunger.

The UN report touches on a number of different ecosystems where the impacts of climate change will be especially severe, singling out the Arctic:

The Arctic region is predicted to lose whole ecosystems, which will have implications for the use, protection and management of wildlife, fisheries, and forests, affecting the customary uses of culturally and economically important species and resources. Arctic indigenous communities—as well as First Nations communities in Canada—are already experiencing a decline in traditional food sources, such as ringed seal and caribou, which are mainstays of their traditional diet. Some communities are being forced to relocate because the thawing permafrost is damaging the road and building infrastructure. Throughout the region, travel is becoming dangerous and more expensive as a consequence of thinning sea ice, unpredictable freezing and thawing of rivers and lakes, and the delay in opening winter roads (roads that can be used only when the land is frozen). (ibid.: 97)

Island populations are also often pointed out as being on the sharp edge of climate change (Lazrus 2012). The award-winning film, ‘There Once Was an Island,’ focuses on a community in the Pacific at risk from a rise in the sea level. As a website for the film describes:

Takuu, a tiny atoll in Papua New Guinea, contains the last Polynesian culture of its kind.  Facing escalating climate-related impacts, including a terrifying flood, community members Teloo, Endar, and Satty, take us on an intimate journey to the core of their lives and dreams. Will they relocate to war-ravaged Bougainville – becoming environmental refugees – or fight to stay? Two visiting scientists investigate on the island, leading audience and community to a greater understanding of climate change.

Similarly, The Global Mail reported the island nation of Kiribati was likely to become uninhabitable in coming decades, not simply because the islands flood but because patterns of rainfall shift and seawater encroaches on the coastal aquifer, leaving wells saline and undrinkable.

Heather Lazrus (2012: 288) reviews a number of other cases:

Low-lying islands and coastal areas such as the Maldives; the Marshall Islands; the Federated States of Micronesia, Kiribati, and Tuvalu; and many arctic islands such as Shishmaref… and the small islands in Nunavut… may be rendered uninhabitable as sea levels rise and freshwater resources are reduced.

Certainly, the evidence from twentieth century cases in which whole island populations were relocated suggests that the move can be terribly disruptive, the social damage lingering long after suitcases are unpacked.

Adding climate injury to cultural insult

In fact, even before average temperatures climbed or sea levels rose, indigenous groups were already at risk and have been for a while. By nearly every available measure, indigenous peoples’ distinctive lifeways and the globe’s cultural diversity are threatened, not so much by climate, but by their wealthier, more technologically advanced neighbours, who often exercise sovereignty over them.

If we take language diversity as an index of cultural distinctiveness, for example, linguist Michael Krauss (1992: 4) warned in the early 1990s that a whole range of languages were either endangered or ‘moribund,’ no longer being learned by new speakers or young people. These moribund languages, Krauss pointed out, would inevitably die with a speaker who had already been born, an individual who would someday be unable to converse in that language because there would simply be no one else to talk to:

The Eyak language of Alaska now has two aged speakers; Mandan has 6, Osage 5, Abenaki-Penobscot 20, and Iowa has 5 fluent speakers. According to counts in 1977, already 13 years ago, Coeur d’Alene had fewer than 20, Tuscarora fewer than 30, Menomini fewer than 50, Yokuts fewer than 10. On and on this sad litany goes, and by no means only for Native North America. Sirenikski Eskimo has two speakers, Ainu is perhaps extinct. Ubykh, the Northwest Caucasian language with the most consonants, 80-some, is nearly extinct, with perhaps only one remaining speaker. (ibid.)

Two decades ago, Krauss went on to estimate that 90% of the Arctic indigenous languages were ‘moribund’; 80% of the Native North American languages; 90% of Aboriginal Australian languages (ibid.: 5). Although the estimate involved a fair bit of guesswork, and we have seen some interesting evidence of ‘revivals’, Krauss suggested that 50% of all languages on earth were in danger of disappearing.

The prognosis may not be quite as grim today, but the intervening years have confirmed the overall pattern. Just recently, The Times of India reported that the country has lost 20% of its languages since 1961 — 220 languages disappeared in fifty years, with the pace accelerating. The spiffy updated Ethnologue website, based upon a more sophisticated set of categories and more precise accounting, suggests that, of the 7105 languages that they recognise globally, around 19% are ‘moribund’ or in worse shape, while another 15% are shrinking but still being taught to new users (see Ethnologue’s discussion of language status here and UNESCO’s interactive atlas of endangered languages).

Back in 2010, I argued that the disappearance of languages was a human rights issue, not simply the inevitable by-product of cultural ‘evolution’, economic motivations, and globalisation (‘Language extinction ain’t no big thing?’ – but beware, as my style of blogging has changed a lot since then). Few peoples voluntarily forsake their mother tongues; the disappearance of a language or assimilation of a culture is generally not a path taken by choice, but a lesser-of-evils decision made when a people is threatened with chronic violence, abject poverty, and marginalisation.

I’ve also written about the case of ‘uncontacted’ Indians on the border of Brazil and Peru, where Western observers sometimes assume that indigenous peoples assimilate because they seek the benefits of ‘modernization’ when, in fact, they are more commonly the victims of exploitation and violent displacement. Just this June, a group of Mashco-Piro, an isolated indigenous group in Peru that has little contact with other societies, engaged in a tense stand-off at the Las Piedras river, a tributary of the Amazon. Caught on video, they appeared to be trying to contact or barter with local Yine Indians at a ranger station. Not only have this group of the Mashco-Piro fought in previous decades with loggers, but they now find that low-flying planes are disturbing their territory in search of natural gas and oil. (Globo Brasil also released footage taken in 2011 by officials from Brazil’s National Indian Foundation, FUNAI, of the Kawahiva, also called the Rio Pardo Indians, an isolated group from Mato Grosso state.)

In 1992, Krauss pleaded with fellow scholars to do something about the loss of cultural variation, lest linguistics ‘go down in history as the only science that presided obliviously over the disappearance of 90% of the very field to which it is dedicated’ (1992: 10):

Surely, just as the extinction of any animal species diminishes our world, so does the extinction of any language. Surely we linguists know, and the general public can sense, that any language is a supreme achievement of a uniquely human collective genius, as divine and endless a mystery as a living organism. Should we mourn the loss of Eyak or Ubykh any less than the loss of the panda or California condor? (ibid.: 8)

The pace of extinction is so quick that some activists, like anthropologist and attorney David Lempert (2010), argue that our field needs to collaborate on the creation of a cultural ‘Red Book,’ analogous to the Red Book for Endangered Species. Anthropologists may fight over the theoretical consequences of reifying cultures, but the political and legal reality is that even states with laws on the books to protect cultural diversity often have no clear guidelines as to what that entails.

But treating cultures solely as fragile victims of climate change misrepresents how humans will adapt to it. Culture is not merely a fixed tradition, a set of calcified ‘customs’ at risk from warming; culture is also our adaptive tool, the primary means by which our ancestors adapted to such a great range of ecological niches in the first place, and by which we will continue to adapt into the future. And this is not the first time that indigenous groups have confronted climate change.

Culture as threatened, culture as adaptation

One important stream of research in the anthropology of climate change shows very clearly that indigenous cultures are quite resilient in the face of environmental change. Anthropologist Sarah Strauss of the University of Wyoming has cautioned that, if we focus only on cultural extinction as a threat from climate change, we may miss the role of culture in allowing people to accommodate wide variation in the environment:

People are extraordinarily resilient. Our cultures have allowed human groups to colonize the most extreme reaches of planet Earth, and no matter where we have gone, we have contended with both environmental and social change…. For this reason, I do not worry that the need to adapt to new and dramatic environmental changes (those of our own making, as well as natural occurrences like volcanoes) will drive cultures—even small island cultures—to disappear entirely.  (Strauss 2012: n.p. [2])

A number of ethnographic cases show how indigenous groups can adapt to severe climatic shifts. Crate (2008: 571), for example, points out that the Sakha adapted to a major migration northward, transforming a Turkic culture born in moderate climates to suit their new home. Kalaugher (2010) also discusses the Yamal Nenets, another group of Siberian nomads, who adapted to both climate change and industrial encroachment, including the arrival of oil and gas companies that fouled waterways and degraded their land (Survival International has a wonderful photo essay about the Yamal Nenets here). A team led by Bruce Forbes of the University of Lapland, Finland, found:

The Nenet have responded by adjusting their migration routes and timing, avoiding disturbed and degraded areas, and developing new economic practices and social interaction, for example by trading with workers who have moved into gas villages in the area. (article here)

Northeast Science Station, Cherskiy, Sakha Republic, by David Mayer, CC (BY NC SA).

But one of the most amazing stories about the resilience and adaptability of the peoples of the Arctic comes from Wade Davis, anthropologist and National Geographic ‘explorer in residence.’ In his wonderful TED presentation, ‘Dreams from endangered cultures,’ Davis tells a story he heard on a trip to the northern tip of Baffin Island, Canada:

…this man, Olayuk, told me a marvelous story of his grandfather. The Canadian government has not always been kind to the Inuit people, and during the 1950s, to establish our sovereignty, we forced them into settlements. This old man’s grandfather refused to go. The family, fearful for his life, took away all of his weapons, all of his tools. Now, you must understand that the Inuit did not fear the cold; they took advantage of it. The runners of their sleds were originally made of fish wrapped in caribou hide. So, this man’s grandfather was not intimidated by the Arctic night or the blizzard that was blowing. He simply slipped outside, pulled down his sealskin trousers and defecated into his hand. And as the feces began to freeze, he shaped it into the form of a blade. He put a spray of saliva on the edge of the shit knife, and as it finally froze solid, he butchered a dog with it. He skinned the dog and improvised a harness, took the ribcage of the dog and improvised a sled, harnessed up an adjacent dog, and disappeared over the ice floes, shit knife in belt. Talk about getting by with nothing.

… and there’s nothing more than I can say after ‘… and disappeared over the ice floes, shit knife in belt’ that can make this story any better…

Climate change in context

The problem for many indigenous cultures is not climate change alone or in isolation, but the potential speed of that change and how it interacts with other factors, many human-induced: introduced diseases, environmental degradation, deforestation and resource depletion, social problems such as substance abuse and domestic violence, and legal systems imposed upon them, including forced settlement and forms of property that prevent movement. As Strauss explains:

Many researchers… see climate change not as a separate problem, in fact, but rather as an intensifier, which overlays but does not transcend the rest of the challenges we face; it is therefore larger in scale and impact, perhaps, but not entirely separable from the many other environmental and cultural change problems already facing human societies. (Strauss 2012: n.p. [2])

One of the clearest examples of these intensifier effects is the way in which nomadic peoples, generally quite resilient, lose their capacity to adapt when they are prevented from moving. The case of the Siberian Yamal Nenets makes this clear:

“We found that free access to open space has been critical for success, as each new threat has arisen, and that institutional constraints and drivers were as important as the documented ecological changes,” said Forbes. “Our findings point to concrete ways in which the Nenets can continue to coexist as their lands are increasingly fragmented by extensive natural gas production and a rapidly warming climate.” (Kalaugher 2010)

With language loss in India, it’s probably no coincidence that, ‘Most of the lost languages belonged to nomadic communities scattered across the country’ (Times of India).

In previous generations, if climate changed, nomadic groups might have migrated to follow familiar resources or adopt techniques from neighbours who had already adapted to forces novel to them. An excellent recent documentary series on the way that Australian Aboriginal people have adapted to climate change on our continent — the end of an ice age, the extinction of megafauna, wholesale climate change including desertification — is a striking example (the website for the series, First Footprints, is excellent).

Today, migration is treated by UN officials and outsiders as ‘failure to adapt’, as people who move fall under the new rubric of ‘climate refugees’ (Lazrus 2012: 293). Migration, instead of being recognised as an adaptive strategy, is treated as just another part of the diabolical problem. (Here in Australia, where refugees on boats trigger unmatched political hysteria, migration from neighbouring areas would be treated as a national security problem rather than an acceptable coping strategy.)

For the most part, the kind of migration that first brought the Viliui Sakha to northeastern Siberia is no longer possible. As the Yamal Nenets, for example, migrate with their herds of reindeer, they come across the drills, pipelines, and even the Obskaya-Bovanenkovo railway – the northernmost railway line in the world – all part of Gazprom’s ‘Yamal Megaproject.’ Endangered indigenous groups are hemmed in on all sides, surviving only in geographical niches that were not attractive to their dominant neighbours, land unsuitable for farming. As Elisabeth Rosenthal wrote in The New York Times:

Throughout history, the traditional final response for indigenous cultures threatened by untenable climate conditions or political strife was to move. But today, moving is often impossible. Land surrounding tribes is now usually occupied by an expanding global population, and once-nomadic groups have often settled down, building homes and schools and even declaring statehood.

The Kamayurá, for example, now eating ants instead of fish in Brazil’s Xingu National Park, are no longer surrounded by the vast expanse of the Amazon and other rivers where they might still fish; the park is now ringed by ranches and farms, some of which grow sugarcane to feed Brazil’s vast ethanol industry or raise cattle to feed the world’s growing appetite for beef.

Now, some of these indigenous groups find themselves squarely in the path of massive new resource extraction projects with nowhere to go, whether that’s in northern Alberta, eastern Peru, Burma, or remnant forests in Indonesia. That is, indigenous peoples have adapted before to severe climate change; but how much latitude (literally) do these groups now have to adapt if we do not allow them to move?

In sum, indigenous people are often not directly threatened by climate change alone; rather, they are pinched between climate change and majority cultures who want Indigenous peoples’ resources while also preventing them from adapting in familiar ways. The irony is that the dynamic driving climate change is attacking them from two sides: the forests that they need, the mountains where they keep their herds, and the soil under the lands where they live are being coveted for the very industrial processes that belch excess carbon into the atmosphere.

It’s hard not to be struck by the bitter tragedy that, in exchange for the resources to which we are addicted, we offer them assimilation. If they get out of the way so that we can drill out the gas or oil under their land or take their forests, we invite them to join in our addiction (albeit as much poorer addicts on the fringes of our societies, if truth be told). They have had little say in the process, or in our efforts to mitigate it. We assume that our technologies and ways of life are the only potential cure for the problems created by these very technologies and ways of life.

In 2008, for example, Warwick Baird, Director of the Native Title Unit of the Australian Human Rights and Equal Opportunity Commission, warned that the shift to an economic mode of addressing climate change abatement threatened to further sideline indigenous people:

Things are moving fast in the world of climate change policy and the urgency is only going to get greater. Yet Indigenous peoples, despite their deep engagement with the land and waters, it seems to me, have little engagement with the formulation of climate change policy, little engagement in climate change negotiations, and little engagement in developing and applying mitigation and adaptation strategies. They have not been included. Human rights have not been at the forefront. (transcript of speech here)

The problem then is not that indigenous populations are especially fragile or unable to adapt; in fact, both human prehistory and history demonstrate that these populations are remarkably resilient. Rather, many of these populations have been pushed to the brink, forced to choose between assimilation or extinction by the unceasing demands of the majority cultures they must live alongside. The danger is not that the indigenous will fall off the precipice, but rather that the flailing attempts of the resource-thirsty developed world to avoid inevitable culture change — the necessary move away from unsustainable modes of living — will push much more sustainable lifeways over the edge into the abyss first.

Links

Inuit Knowledge and Climate Change (54:07 documentary).
Isuma TV, network of Inuit and Indigenous media producers.

Inuit Perspectives on Recent Climate Change, Skeptical Science, by Caitlyn Baikie, an Inuit geography student at Memorial University of Newfoundland.

Images

The Lena Pillars by Maarten Takens, CC licensed (BY SA). Original at Flickr: http://www.flickr.com/photos/takens/8512871877/

Verkhoyansk Mountains, Sakha Republic, by Maarten Takens, CC licensed (BY SA). Original at Flickr: http://www.flickr.com/photos/35742910@N05/8582017913/in/photolist-e5n5W6-dVQQHP-dYfGe4-dWzA7s-dW8maK-89CRTd-89zppv-7yp2ht-8o9NBd-89CBRs-dWNM2R-8SLQrQ

Northeast Science Station in late July 2012. Cherskiy, Sakha Republic, Russia, by David Mayer, CC licensed (BY NC SA). Original at Flickr: http://www.flickr.com/photos/56778570@N02/8760624135/in/photolist-em9ukH-dSXQnN

References

Crate, S. A. 2003. Co-option in Siberia: The Case of Diamonds and the Vilyuy Sakha. Polar Geography 26(4): 289–307. doi: 10.1080/789610151 (pdf available here)

Crate, S. 2008. Gone the Bull of Winter? Grappling with the Cultural Implications of and Anthropology’s Role(s) in Global Climate Change. Current Anthropology 49(4): 569-595. doi: 10.1086/529543. Stable URL: http://www.jstor.org/stable/10.1086/529543

Crate, S. 2011. Climate and Culture: Anthropology in the Era of Contemporary Climate Change. Annual Review of Anthropology 40:175–94. doi:10.1146/annurev.anthro.012809.104925 (pdf available here)

Cruikshank, J. 2001. Glaciers and Climate Change: Perspectives from Oral Tradition. Arctic 54(4): 377-393. Stable URL: http://www.jstor.org/stable/40512394

Foden WB, Butchart SHM, Stuart SN, Vié J-C, Akçakaya HR, et al. (2013) Identifying the World’s Most Climate Change Vulnerable Species: A Systematic Trait-Based Assessment of all Birds, Amphibians and Corals. PLoS ONE 8(6): e65427. doi:10.1371/journal.pone.0065427

Kalaugher, L. 2010. Learning from Siberian Nomads’ Resilience. Bristol, UK: Environ. Res. Web. http://environmentalresearchweb.org/cws/article/news/41363

Krauss, M. 1992. The world’s languages in crisis. Language 68(1): 4-10. (pdf available here)

Lazrus, H. 2012. Sea Change: Island Communities and Climate Change. Annu. Rev. Anthropology 41:285–301. doi: 10.1146/annurev-anthro-092611-145730

Lempert, D. 2010. Why We Need a Cultural Red Book for Endangered Cultures, NOW: How Social Scientists and Lawyers/Rights Activists Need to Join Forces. International Journal on Minority and Group Rights 17: 511–550. doi: 10.1163/157181110X531420

Lende, D. H. 2013. The Newtown Massacre and Anthropology’s Public Response. American Anthropologist 115 (3): 496–501. doi:10.1111/aman.12031

Strauss, S. 2012. Are cultures endangered by climate change? Yes, but. . . WIREs Clim Change. doi: 10.1002/wcc.181 (pdf available here)

United Nations. 2009. The State of the World’s Indigenous People. Department of Economic and Social Affairs, ST/ESA/328. United Nations Publications: New York. (available online as a pdf)

Fukushima Forever (Huff Post)

Charles Perrow

Posted: 09/20/2013 2:49 pm

Recent disclosures of tons of radioactive water from the damaged Fukushima reactors spilling into the ocean are just the latest evidence of the continuing incompetence of the Japanese utility, TEPCO. The announcement that the Japanese government will step in is also not reassuring, since it was the Japanese government that failed to regulate the utility for decades. But, bad as it is, the current contamination of the ocean should be the least of our worries. The radioactive poisons are expected to form a plume that will be carried by currents to the coast of North America. But the effects will be small, adding an unfortunate bit to our background radiation. Fish swimming through the plume will be affected, but we can avoid eating them.

Much more serious is the danger that the spent fuel rod pool at the top of reactor number four will collapse in a storm or an earthquake, or in a failed attempt to carefully remove each of the 1,535 rods and safely transport them to the common storage pool 50 meters away. Conditions in the unit 4 pool, 100 feet above the ground, are perilous, and if any two of the rods touch it could cause a nuclear reaction that would be uncontrollable. The radiation emitted from all these rods, if they are not continually cooled and kept separate, would require the evacuation of surrounding areas, including Tokyo. Because of the radiation at the site, the 6,375 rods in the common storage pool could not be continuously cooled; they would fission, and all of humanity would be threatened for thousands of years.

Fukushima is just the latest episode in a dangerous dance with radiation that has been going on for 68 years. Since the atomic bombing of Nagasaki and Hiroshima in 1945 we have repeatedly let loose plutonium and other radioactive substances on our planet, and authorities have repeatedly denied or trivialized their dangers. The authorities include national governments (the U.S., Japan, the Soviet Union/Russia, England, France and Germany); the worldwide nuclear power industry; and some scientists both in and outside of these governments and the nuclear power industry. Denials and trivialization have continued with Fukushima. (Documentation of the following observations can be found in my piece in the Bulletin of the Atomic Scientists, upon which this article is based.) (Perrow 2013)

In 1945, shortly after the bombing of two Japanese cities, the New York Times headline read: “Survey Rules Out Nagasaki Dangers”; soon after the 2011 Fukushima disaster it read “Experts Foresee No Detectable Health Impact from Fukushima Radiation.” In between these two we had experts reassuring us about the nuclear bomb tests, plutonium plant disasters at Windscale in northern England and Chelyabinsk in the Ural Mountains, and the nuclear power plant accidents at Three Mile Island in the United States and Chernobyl in what is now Ukraine, as well as the normal operation of nuclear power plants.

Initially the U.S. Government denied that low-level radiation experienced by thousands of Japanese people in and near the two cities was dangerous. In 1953, the newly formed Atomic Energy Commission insisted that low-level exposure to radiation “can be continued indefinitely without any detectable bodily change.” Biologists and other scientists took exception to this, and a 1956 report by the National Academy of Sciences, examining data from Japan and from residents of the Marshall Islands exposed to nuclear test fallout, successfully established that all radiation was harmful. The Atomic Energy Commission then promoted a statistical or population approach that minimized the danger: the damage would be so small that it would hardly be detectable in a large population and could be due to any number of other causes. Nevertheless, the Radiation Research Foundation detected it in 1,900 excess deaths among the Japanese exposed to the two bombs. (The Department of Homeland Security estimated only 430 cancer deaths).

Besides the uproar about the worldwide fallout from testing nuclear weapons, another problem with nuclear fission soon emerged: a fire in a British plant making plutonium for nuclear weapons sent radioactive material over a large area of Cumbria, resulting in an estimated 240 premature cancer deaths, though the link is still disputed. The event was not made public and no evacuations were ordered. Also kept secret, for over 25 years, was a much larger explosion and fire, also in 1957, at the Chelyabinsk nuclear weapons processing plant in the eastern Ural Mountains of the Soviet Union. One estimate is that 272,000 people were irradiated; lakes and streams were contaminated; 7,500 people were evacuated; and some areas still are uninhabitable. The CIA knew of it immediately, but they too kept it secret. If a plutonium plant could do that much damage it would be a powerful argument for not building nuclear weapons.

Powerful arguments were needed, due to the fallout from the fallout from bombs and tests. Peaceful use became the mantra. Project Plowshares, initiated in 1958, conducted 27 “peaceful nuclear explosions” from 1961 until the costs, as well as public pressure from unforeseen consequences, ended the program in 1975. The Chairman of the Atomic Energy Commission indicated Plowshares’ close relationship to the increasing opposition to nuclear weapons, saying that peaceful applications of nuclear explosives would “create a climate of world opinion that is more favorable to weapons development and tests” (emphasis supplied). A Pentagon official was equally blunt, saying in 1953, “The atomic bomb will be accepted far more readily if at the same time atomic energy is being used for constructive ends.” The minutes of a National Security Council meeting in 1953 spoke of destroying the taboo associated with nuclear weapons and “dissipating” the feeling that we could not use an A-bomb.

More useful than peaceful nuclear explosions were nuclear power plants, which would produce the plutonium necessary for atomic weapons as well as legitimating them. Nuclear power plants, the daughter of the weapons program — actually its “bad seed” — were born and soon saw first fruit with the 1979 Three Mile Island accident. Increases in cancer were found, but the Columbia University study declared that the level of radiation from TMI was too low to have caused them, and the “stress” hypothesis made its first appearance as the explanation for rises in cancer. Another university study disputed this, arguing that radiation caused the increase, and since a victim suit was involved, it went to a Federal judge who ruled in favor of stress. A third, larger study found “slight” increases in cancer mortality and increased risk of breast and other cancers, but found “no consistent evidence” of a “significant impact.” Indeed, it would be hard to find such an impact when so many other things can cause cancer, and it is so widespread. Indeed, since stress can cause it, there is ample ambiguity that can be mobilized to defend nuclear power plants.

Ambiguity was mobilized by the Soviet Union after the 1986 Chernobyl disaster. Medical studies by Russian scientists were suppressed, and doctors were told not to use the designation of leukemia in health reports. Only after a few years had elapsed did any serious studies acknowledge that the radiation was serious. The Soviet Union forcefully argued that the large drops in life expectancy in the affected areas were due not just to stress, but to lifestyle changes. The International Atomic Energy Agency (IAEA), charged with both promoting nuclear power and helping make it safe, agreed, and mentioned such things as obesity, smoking, and even unprotected sex, arguing that the affected population should not be treated as “victims” but as “survivors.” The count of premature deaths has varied widely: UN agencies put it at 4,000 in the contaminated areas of Ukraine, Belarus and Russia, while Greenpeace puts it at 200,000. We also have the controversial worldwide estimate of 985,000 from Russian scientists with access to thousands of publications from the affected regions.

Even when nuclear power plants are running normally they are expected to release some radiation, but so little as to be harmless. Numerous studies have now challenged that. When eight U.S. nuclear plants were closed in 1987, they provided the opportunity for a field test. Two years later, strontium-90 levels in local milk had declined sharply, as had birth defects and death rates of infants within 40 miles of the plants. A 2007 study of all German nuclear power plants found that childhood leukemia more than doubled among children living less than 3 miles from the plants, but the researchers held that the plants could not have caused it because their radiation levels were so low. Similar results were found in a French study, with a similar conclusion: it could not be low-level radiation, though the researchers had no other explanation. A meta-study published in 2007 of 136 reactor sites in seven countries, extended to include children up to age 9, found childhood leukemia increases of 14 percent to 21 percent.

Epidemiological studies of children and adults living near the Fukushima Daiichi nuclear plant will face the same obstacles as earlier studies. About 40 percent of the aging population of Japan will die of some form of cancer; how can one be sure it was not caused by one of the multiple other causes? It took decades for the effects of the atomic bombs and Chernobyl to clearly emblazon the word “CANCER” on these events. Almost all scientists finally agree that the dose effects are linear, that is, any radiation added to natural background radiation, even low-levels of radiation, is harmful. But how harmful?

University professors have declared that the health effects of Fukushima are “negligible,” will cause “close to no deaths,” and that much of the damage was “really psychological.” Extensive and expensive follow-up on citizens from the Fukushima area, the experts say, is not worth it. There is doubt a direct link will ever be definitively made, one expert said. The head of the U.S. National Council on Radiation Protection and Measurements said: “There’s no opportunity for conducting epidemiological studies that have any chance of success….The doses are just too low.” We have heard this in 1945, at TMI, at Chernobyl, and for normally running power plants. It is surprising that respected scientists refuse to make another test of such an important null hypothesis: that there are no discernible effects of low-level radiation.

Not surprisingly, a nuclear power trade group announced shortly after the March 2011 meltdown at Fukushima (the meltdown started with the earthquake, well before the tsunami hit) that “no health effects are expected” as a result of the events. UN agencies agree with them and the U.S. Council. The leading UN organization on the effects of radiation concluded “Radiation exposure following the nuclear accident at Fukushima-Daiichi did not cause any immediate health effects. It is unlikely to be able to attribute any health effects in the future among the general public and the vast majority of workers.” The World Health Organization stated that while people in the United States receive about 6.5 millisieverts per year from sources including background radiation and medical procedures, only two Japanese communities had effective dose rates of 10 to 50 millisieverts, a bit more than normal.

However, other data contradict the WHO and other UN agencies. The Japanese science and technology ministry (MEXT) indicated that a child in one community would have an exposure 100 times the natural background radiation in Japan, rather than a bit more than normal. A hospital reported that more than half of the 527 children examined six months after the disaster had internal exposure to cesium-137, an isotope that poses great risk to human health. A French radiological institute found ambient dose rates 20 to 40 times that of background radiation, and in the most contaminated areas the rates were even 10 times those elevated dose rates. The institute predicts an excess cancer rate of 2 percent in the first year alone. Experts not associated with the nuclear industry or the UN agencies have estimated from 1,000 to 3,000 cancer deaths. Nearly two years after the disaster the WHO was still declaring that any increase in human disease “is likely to remain below detectable levels.” (It is worth noting that the WHO still only releases reports on radiation impacts in consultation with the International Atomic Energy Agency.)

In March 2013, the Fukushima Prefecture Health Management Survey reported examining 133,000 children using new, highly sensitive ultrasound equipment. The survey found that 41 percent of the children examined had cysts of up to 2 centimeters in size and lumps measuring up to 5 millimeters on their thyroid glands, presumably from inhaled and ingested radioactive iodine. However, as we might expect from our chronicle, the survey found no cause for alarm because the cysts and lumps were too small to warrant further examination. The defense ministry also conducted an ultrasound examination of children from three other prefectures distant from Fukushima and found somewhat higher percentages of small cysts and lumps, adding to the argument that radiation was not the cause. But others point out that radiation effects would not be expected to be limited to what is designated as the contaminated area; that these cysts and lumps, signs of possible thyroid cancer, have appeared alarmingly soon after exposure; that they should be followed up, since it takes a few years for cancer to show up and thyroid cancer is rare in children; and that a control group far from Japan should be tested with the same ultrasound techniques.

The denial that Fukushima has any significant health impacts echoes the denials of the atomic bomb effects in 1945; the secrecy surrounding Windscale and Chelyabinsk; the dismissal of studies suggesting that the fallout from Three Mile Island was, in fact, serious; and the multiple denials regarding Chernobyl (that it happened, that it was serious, and that it is still serious).

As of June 2013, according to a report in The Japan Times, 12 of 175,499 children tested had tested positive for possible thyroid cancer, and 15 more were deemed at high risk of developing the disease. For a disease that is rare, this is a high number. Meanwhile, the U.S. government is still trying to get us to ignore the bad seed. In June 2012, the U.S. Department of Energy granted $1.7 million to the Massachusetts Institute of Technology to address the “difficulties in gaining the broad social acceptance” of nuclear power.

Perrow, Charles. 2013. “Nuclear denial: From Hiroshima to Fukushima.” Bulletin of the Atomic Scientists 69(5): 56-67.

More on the laboratory-animal controversy (25/10/2013)

Animal testing: a humanitarian issue (Jornal da Ciência)

October 25, 2013

The scientific community defends animal experimentation and condemns the invasion of the Instituto Royal

The first researcher to develop a rabies vaccine, Louis Pasteur (1822-1895), contributed enormously to the validation of scientific methods involving animal testing; Carlos Chagas (1878-1934) experimented on marmosets and insects in his studies of malaria and in the discovery of Chagas disease; and the polio vaccine was only possible thanks to the research that Albert Sabin (1906-1993) conducted on dozens of monkeys. Beyond their international renown, what these three scientists have in common is that they entered history for their great contributions to the advancement of science for the benefit of humanity.

By contrast, the outcry caused by the invasion of the Instituto Royal in São Roque, 59 kilometres from São Paulo, in the early hours of 18 October treats researchers the same way one would treat torturers or animal traffickers. Scientific experimentation is not farra do boi, cockfighting, or bullfighting, and scientists should not be treated by public opinion as criminals.

During the invasion, activists opposed to the use of animals in scientific research took 178 beagle dogs and seven rabbits from the institute, leaving behind hundreds of rats. “It is a great mistake, an act of irresponsibility and of ignorance of reality, to go to the media claiming that animals are no longer necessary for the discovery of new vaccines, medicines, and therapies,” warned Renato Cordeiro, full professor at Fiocruz (Fundação Oswaldo Cruz).

Em carta aberta divulgada no último dia 22, a Sociedade Brasileira para o Progresso da Ciência (SBPC) e a Academia Brasileira de Ciências (ABC) lembram da importância da experimentação com animais. “Na história da medicina mundial, descobertas fundamentais foram realizadas, milhões de mortes evitadas e expectativas de vida aumentadas, graças à utilização dos animais em pesquisas para a saúde humana e animal”, diz o texto assinado pelos presidentes das entidades, Helena Nader e Jacob Palis, respectivamente.

Renato Cordeiro citou algumas dessas descobertas: o controle de qualidade de vacinas contra a pólio, o sarampo, a difteria, o tétano, a hepatite, a febre amarela e a meningite foram possíveis a partir desse tipo de experimentação. “Testes com animais também foram essenciais para a descoberta de anestésicos, de antibióticos e dos anti-inflamatórios, de fármacos para o controle da hipertensão arterial e diabetes”, relacionou, lembrando ainda de medicamentos para controlar a dor, a asma, para tratamento da ansiedade, dos antidepressivos, dos quimioterápicos, e dos hormônios anticoncepcionais.

Mais do que isso, os próprios animais têm sido beneficiados com os avanços da ciência no campo da terapêutica e cirurgia experimental. O pesquisador destaca as vacinas para a raiva, a cinomose, a febre aftosa, as pesquisas com o vírus da imunodeficiência felina, a tuberculose e várias doenças infecto-parasitárias.

Outra associação de pesquisadores, a Federação de Sociedades de Biologia Experimental (FeSBE), também divulgou manifesto expressando repúdio à invasão do Instituto Royal. De acordo com o texto, a sociedade quer que a qualidade de vida e a saúde animal evoluam no mesmo ritmo. “A pesquisa científica tem respondido a essa demanda, mas é preciso que o obscurantismo seja erradicado do nosso meio para que a sociedade possa usufruir dos recentes avanços científicos e dos que ainda serão produzidos”, diz o manifesto.

No mesmo sentido, Cordeiro cita trabalhos que estão sendo desenvolvidos em laboratórios brasileiros. “Eles visam à descoberta de vacinas e medicamentos para a malária, a Aids, dengue, tuberculose e outras doenças. Poderíamos dizer que os animais experimentais são grandes responsáveis pela sobrevivência da raça humana no planeta”, argumenta.

Embora técnicas sofisticadas e equipamentos com alta tecnologia sejam necessários para algumas dessas pesquisas, o uso de animais de laboratório ainda é necessário para sua execução. “Em virtude da complexidade da célula biológica”, explica. Pesquisadores já desenvolvem esforços na busca de métodos alternativos para que algum dia os animais não sejam mais necessários nesse processo. No entanto, somente em alguns poucos casos, a Biologia Celular e Molecular oferece essa possibilidade. “Através de técnicas de cultura de tecidos e simulações computacionais”, esclarece.

Animal welfare in science

On one point, scientists and the invading activists agree: animals should not suffer. Since there are still no methods capable of replacing animal testing in a range of research that is fundamental to the future of humanity and to human health and survival, what several countries are doing is regulating and overseeing these activities to minimize the animals' suffering and to assess the studies' relevance to humanity.

In Brazil, the body responsible for setting those rules is the Conselho Nacional de Controle de Experimentação Animal (Concea), part of the Ministry of Science, Technology and Innovation, of which Cordeiro was the first coordinator. Today Concea is coordinated by Marcelo Morales, an officer of both the SBPC and FeSBE.

A major milestone came with the approval of the Arouca Law (11,794/2008), which regulated the breeding and use of animals in teaching and scientific research. Besides creating Concea, the new law required research institutions to establish an Ethics Committee on Animal Use (Ceua).

These committees are essential for approving, controlling, and monitoring animal breeding, teaching, and scientific research activities, and for ensuring compliance with the rules governing animal experimentation. “The Ceuas represent a major cultural shift in science. They are made up of veterinarians and biologists, professors and researchers in the specific field of the research, and a representative of animal-protection societies legally constituted and established in the country,” says the former Concea coordinator.

Those representatives play an important role in the process. “They are highly qualified professionals, trained at the doctoral level, and have made excellent contributions to Concea's discussions and deliberations,” Cordeiro says. Considered the bible of research laboratories, the Brazilian Guideline for the Care and Use of Animals for Scientific and Educational Purposes (DBCA) was cited by the researcher as one recent example of the board members' competence.

(Mario Nicoll / Jornal da Ciência)

*   *   *

Surgery on a pig sharpens the debate over the use of laboratory animals in experiments (Correio Braziliense)

Activists stormed a medicine class at PUC-Campinas to videotape a surgery in which students practice tracheostomy techniques on a live pig. Actions of this kind – like the theft of dogs from a São Paulo laboratory – worry scientists

Five days after a group of activists raided the Instituto Royal in São Roque (in the interior of São Paulo state) to take 178 beagle dogs in protest against the use of animals as test subjects, another action was recorded in the state's interior. A group of people interrupted a practical class in the medicine program of the Pontifícia Universidade Católica (PUC) of Campinas, in which six pigs were being used to teach students tracheostomy (opening a hole in the trachea to allow breathing). The activists filmed the procedures and then left. Later, the government's reaction to the recent wave of protests came in the form of a statement signed by the Conselho Nacional de Controle de Experimentação Animal (Concea) of the Ministry of Science, Technology and Innovation.

Marcelo Morales, Concea's coordinator, told the Correio that he views the current wave of protests with concern. “This obscurantist movement taking place in Brazil is very worrying. We do not know where it comes from, but it shows irrationality, a very serious backwardness,” said the researcher, who also sits on the board of the Sociedade Brasileira para o Progresso da Ciência (SBPC). According to him, it is the development of drugs and treatments for diseases that is threatened by the “radicalism” of self-styled animal defenders.

“Activists repeat what they read on the internet; they have no scientific grounding. The claim that we can use alternative methods for most procedures is nonsense. Very few exist today. In medical schools we already use ways to reduce the number of animals, such as filming classes. But it is not always possible. At some point, students need the animal,” says Morales.

(Renata Mari/Correio Braziliense)

http://www.correiobraziliense.com.br/app/noticia/politica-brasil-economia/33,65,33,12/2013/10/25/interna_brasil,395271/cirurgia-em-porco-acirra-debate-do-uso-de-cobaias-em-experimentos.shtml

Folha de S.Paulo's story on the subject:

Activists storm a practical class with pigs at PUC-Campinas

http://www1.folha.uol.com.br/fsp/cotidiano/135652-ativistas-invadem-aula-pratica-com-porcos-na-puc-de-campinas.shtml

*   *   *

Fiocruz issues public statement in defense of the use of animals in scientific research (Jornal da Ciência)

The document stresses that the medicines, vaccines, and therapeutic alternatives available for human use today depended on earlier phases of animal experimentation

The Fundação Oswaldo Cruz (Fiocruz) released a statement reaffirming to society its ethical commitment regarding the use of animals for scientific purposes. The text stresses that science cannot do without animal experimentation. “Medicines, vaccines, and therapeutic alternatives available for human use today depended on earlier phases of animal experimentation,” the text says.

The document notes that scientific research involving animals is guided by animal-welfare principles and that the activity is regulated by national and international legal instruments. Fiocruz also recalls that Law 11,794/2008, which regulates the scientific use of animals, was widely supported by its community.

(Jornal da Ciência)

Read the document:

Public statement: Fiocruz and the use of animals in scientific research

The Fundação Oswaldo Cruz (Fiocruz), an institution that has served public health and the Brazilian population since 1900, in light of recent events in the country, comes before the public to fulfill its role of clarification and to reaffirm to society its ethical commitment regarding the use of animals for scientific purposes.

It is essential to stress that, despite many efforts worldwide, under current conditions science cannot do without animal experimentation. It is also important to note that the medicines, vaccines, and therapeutic alternatives available for human use today depended on earlier phases of animal experimentation. Animal experimentation is necessary even in the veterinary field.

Scientific research involving animals is guided by animal-welfare principles, adopting, among others, the criterion of reduction, using the smallest possible number of animals in each experiment, and that of replacement, substituting another strategy for the use of animals whenever technically feasible.

The activity is regulated by national and international legal instruments, and regulatory bodies operate at several levels, linked to the federal government (the Conselho Nacional de Controle de Experimentação Animal – Concea), to the veterinary councils, and within the scientific institutions themselves (the Ethics Committees on Animal Use – CEUAs).

Fiocruz also takes this opportunity to inform society that Law 11,794/2008, which regulates the Federal Constitution with respect to the scientific use of animals, was widely supported by its community, and had as its rapporteur then federal deputy Sergio Arouca, a public-health physician and former president of Fiocruz. In addition, the Foundation was one of the first institutions in the country to establish a CEUA. This body is responsible for approving all scientific projects involving the use of animals, reviewing the ethics of the procedures and the number of animals, among other issues.

Executive secretary of UN climate body cries while speaking about climate change (O Globo)

October 23, 2013

Christiana Figueres, head of the UN climate change secretariat, became emotional when speaking about the impact of the changes on future generations at a conference in London

Executive secretary of the United Nations Framework Convention on Climate Change (UNFCCC), Costa Rican Christiana Figueres made a passionate defense of the negotiations toward a new global agreement to fight the problem at a conference in London this Monday. In her speech at the event, Figueres complained about the slow pace of the talks but was optimistic that in 2015 an agreement could be signed binding the world's main polluting countries to greenhouse-gas emission-reduction targets starting in 2020.

– I always get frustrated with the pace of the negotiations; I was born impatient – Figueres said. – We are moving very, very slowly, but we are moving in the right direction, and that is what gives me courage and hope.

And Figueres kept the passionate tone after the speech. Approached by a reporter from the British broadcaster BBC, who asked her about the impact of climate change, she became emotional and even cried.

– I am committed to (the fight against) climate change because of future generations, not because of us, right? We will be gone – she said. – I just feel that what we are doing to future generations is totally unfair and immoral. We are condemning them before they are even born. But we have a choice about it, that is the point: we have a choice. If (climate change) is inevitable, so be it, but we have the choice of trying to change the future we will give our children.

(O Globo)

http://oglobo.globo.com/ciencia/secretaria-executiva-de-painel-da-onu-chora-ao-falar-sobre-mudancas-climaticas-10488256#ixzz2iYDyHz8N

Humans evolve more slowly than apes, study says (Folha de S.Paulo)

October 24, 2013

The Folha de S.Paulo report shows that the research found that the differences between the species lie in which genes are active

Comparing the genetic activity of humans with that of chimpanzees suggests that Homo sapiens is evolving more slowly than the apes. The finding was made by scientists investigating why humans and their closest cousin are so different despite having 98% identical DNA.

The secret of the physical and behavioral differences lies in which genes are actually active in each species. Analyzing embryonic-like cells, Brazilian researcher Carolina Marchetto of the Salk Institute in San Diego (USA) discovered mechanisms that put a brake on the rate of genetic change in the human species.

The finding supports the hypothesis that the advent of culture slowed biological evolution: since humans adapt to different environments using knowledge, our species no longer depends as much on genetic variation to evolve and survive change.

Apes, by contrast, mammals with more limited cognition, need their DNA to evolve quickly to survive change: they cannot compensate for missing innate traits with knowledge and technology alone.

But doesn't human DNA also need to evolve? “We don't know what we are paying for this in terms of adaptation, but for now it works efficiently,” Marchetto says.

The scientist's work, described today in the journal Nature, helps explain the mystery of the greater diversity of simian DNA. A layperson might think all chimpanzees are alike, but a single wild colony of these apes in Africa has more genetic variability than all of humanity.

JUMPING GENES

According to Marchetto's study, the apes' greater genetic variability has to do with so-called transposons, genes that jump from one place to another in the chromosomes. In the process, transposons reorganize the genome, activating some genes and deactivating others.

These “jumping genes” are quite active in chimpanzees and bonobos (apes equally close to the human lineage). In humans, the transposons are suppressed by two other genes that are abundantly expressed and inhibit the genetic “jump.”

Chimpanzees, in a way, need transposons. With rudimentary tools and no language to pass on knowledge, they must offer greater genetic variability to natural selection so that it can make them better adapted if the environment changes.

Marchetto's research was only possible because her laboratory at the Salk, led by biologist Fred Gage, has mastered the technique of reverting cells to an embryonic-like stage.

The material used in the research was extracted from the skin of apes and people, since there are many restrictions on the use of embryos in scientific experiments.

Reverted to the stage of “induced pluripotent cells,” the skin tissue behaves like an embryo, and it becomes possible to investigate the molecular biology of the early stages of development, when the emergence of genetic diversity has future consequences.

“One of the special things about our study is that reprogramming chimpanzee and bonobo cells gives us a model to start studying evolutionary questions we previously had no way to address,” Marchetto says.

TOWARD THE BRAIN

The differences in gene activation between humans and chimpanzees, she explains, are not restricted to embryonic-like cells. The idea Marchetto and her colleagues now have is to turn these cells into neurons, for example, to understand how the molecular biology of the two species changes during the formation of the brain.

(Rafael Garcia/ Folha de São Paulo)

http://www1.folha.uol.com.br/ciencia/2013/10/1361208-homem-evolui-mais-devagar-que-macaco-diz-estudo.shtml

More on the rescue of the beagles from the Instituto Royal (24/10/2013)

Experimental animals bear great responsibility for the survival of the human race on the planet, says former Concea coordinator (Jornal da Ciência)

In an exclusive interview with the Jornal da Ciência, Renato Cordeiro, a Fiocruz researcher, discusses the importance of animal experimentation for science

A senior researcher at Fiocruz, Renato Cordeiro was the first coordinator of the Conselho Nacional de Controle de Experimentação Animal (Concea), created by the Arouca Law (Law 11,794/2008), which regulates the breeding and use of animals in teaching and scientific research in the country. In this exclusive interview with the Jornal da Ciência, he discusses the rules and the importance of this kind of experimentation for humanity.

Cordeiro's remarks will be used in an article to be published in the next print edition of the Jornal da Ciência. The reports will cover various aspects of the raid on the Instituto Royal in São Roque, 59 kilometers from São Paulo, in the early hours of October 18. During the raid, animal-protection activists took 178 beagle dogs and seven rabbits from the institute, leaving behind hundreds of rats. The act has been condemned by the scientific community and seen as harmful to science.

Jornal da Ciência – How important is the use of animals in scientific research?

Cordeiro – Great advances in public health have been delivered to humanity thanks to the use of animals in scientific research. Examples include the discovery and quality control of vaccines against polio, measles, diphtheria, tetanus, hepatitis, yellow fever, and meningitis. Animal testing was also essential for the discovery of anesthetics, antibiotics, and anti-inflammatories; of drugs to control high blood pressure, diabetes, pain, and asthma; and of anxiety treatments, antidepressants, chemotherapy drugs, and contraceptive hormones. Today, several projects are under way in Brazilian laboratories aimed at discovering vaccines and medicines for malaria, AIDS, dengue, tuberculosis, and other diseases. We could therefore say that experimental animals bear great responsibility for the survival of the human race on the planet.

Why is it not possible to give up this kind of experimentation?

Although highly sophisticated techniques and high-technology equipment are needed for some research, because of the complexity of the biological cell the use of laboratory animals is still necessary to carry it out. It is worth noting that many researchers, in Brazil and abroad, are already making great efforts to discover alternative methods, so that one day animals will no longer be needed or used in experimental research. At present, however, cell and molecular biology, through tissue-culture techniques and computer simulations, offers that possibility in only a few cases. In this sense, it is a great mistake, an irresponsibility, and a display of ignorance of reality to go to the media and claim that animals are no longer necessary for the discovery of new vaccines, medicines, and therapies.

How are these tests regulated?

In Brazil, a major milestone for scientific research in health came with the approval of Law 11,794 of October 2008, known as the Arouca Law, which regulated the breeding and use of animals in teaching and scientific research in the country. The new law created the Conselho Nacional de Controle de Experimentação Animal (Concea) and required research institutions to establish an Ethics Committee on Animal Use (Ceua).

Concea's Resolution No. 1 set out the competencies of the Ceuas, which are the essential bodies for approving, controlling, and monitoring animal breeding, teaching, and scientific research activities, and for ensuring compliance with the animal-experimentation rules issued by Concea.

The Ceuas represent a major cultural shift in science. They are made up of veterinarians and biologists, professors and researchers in the specific field of the research, and a representative of animal-protection societies legally constituted and established in the country.

Linked to the Ministry of Science and Technology, Concea has performed excellently. Among its main competencies are issuing and enforcing rules on the humane use of animals for teaching and scientific research; accrediting Brazilian institutions to breed or use animals in teaching and scientific research; and monitoring and evaluating the introduction of alternative techniques to replace the use of animals in teaching and research.

And what is the role of animal-protection organizations in this process?

The representatives of the animal-protection societies legally established in the country are highly qualified professionals, with graduate training at the doctoral level, and they have made excellent contributions to Concea's discussions and deliberations. The Brazilian Guideline for the Care and Use of Animals for Scientific and Educational Purposes (DBCA), published in Normative Resolution 12 of September 20, 2013, a bible for research laboratories in Brazil, is one recent example of the board members' competence.

How important is animal experimentation for animals themselves?

Domestic animals such as dogs and cats, and animals of economic interest such as cattle, pigs, and poultry, have also benefited from scientific advances in experimental therapeutics and surgery. We could highlight the vaccines against rabies, distemper, and foot-and-mouth disease, and research on the feline immunodeficiency virus, tuberculosis, and various infectious and parasitic diseases.

(Mario Nicoll / Jornal da Ciência)

See also:

ABC and SBPC speak out against the raid on the Instituto Royal – Text jointly signed by the entities' presidents, Jacob Palis and Helena Nader

http://www.jornaldaciencia.org.br/Detalhe.php?id=90153

Fiocruz specialist calls the raid on the Instituto Royal a mistake – For Marco Aurélio Martins, the activists' attack on scientific experiments is an attempt to “irresponsibly” misinform the public

http://www.jornaldaciencia.org.br/Detalhe.php?id=90093

*   *   *

Instituto Royal denies using animals in tests of cosmetics or cleaning products (Agência Brasil)

Physician calls the images of mutilated dogs posted by activists on social media “sensationalism”

The Instituto Royal denied yesterday (23), in a video recorded by the institution's general manager, Silva Ortiz, that it tested cosmetics or cleaning products on animals. In the early hours of Friday (18), activists raided the institute and removed 178 beagle dogs that were used in scientific testing. The activists claimed the animals had been mistreated and were used as test subjects for cosmetics and cleaning products.

“We do not test cosmetics on animals. That kind of test is done only in vitro, that is, inside laboratory equipment, without animals,” the manager said. According to her, the research focused on medicines and herbal drugs to treat diseases such as cancer, diabetes, hypertension, and epilepsy, among others, as well as on the development of antibiotics and analgesics. “The goal is to test the safety of new medicines so they can be used by people like you and me.”

According to physician Marcelo Morales, coordinator of the Conselho Nacional de Controle de Experimentação Animal (Concea) and a board member of the Sociedade Brasileira para o Progresso da Ciência (SBPC), the animals the activists removed from the laboratory are in danger.

“You cannot suddenly remove animals that were raised in animal facilities [controlled, protected environments where animals used in testing are bred or kept], because they can die. They are at risk right now. These are special animals; they have needed the attention of veterinarians since birth. There were elderly animals with kidney problems that were monitored daily. When they are taken out of the institute, they are in danger. Even medical records were stolen,” he said.

The physician calls the images of mutilated dogs posted by activists on social media “sensationalism.” “The animal without an eye is activist sensationalism. The animal that appeared with an injured tongue was hurt in a fight with another animal and was treated. It was completely fine,” he said.

According to the president of the Ethics Committee on Animal Experimentation (Ceea) at Unicamp, Ana Maria Guaraldo, progress in research on stem cells, muscular dystrophy, and Chagas disease was made possible through animal research. “The pacemaker was first used in dogs. How many people today have better lives because their arrhythmia is under control?” the researcher asks.

Ana Maria argues that the activists should learn more about laboratory research with animals, and she rules out the complete replacement of animals in scientific research. “The law provides that alternative methods will be developed and validated to reduce the kinds of animals used. The process takes ten years on average until these new methods are validated, and it is researchers inside laboratories who develop the alternative methods,” she explains.

Many kinds of animals are used in research: mice, rats, dogs, sheep, fish, opossums, armadillos, pigeons, primates, quail, and horses, among others. According to the researcher, under international protocols new molecules must be tested in two rodent species and a third, non-rodent animal for the research to be validated. “Beagles are docile and of a suitable size. They are animals with full international standardization and have been in laboratories worldwide for a long time,” said Ana Maria.

On the other side, the coordinator of the Practical Ethics Laboratory in the Philosophy Department of the Universidade Federal de Santa Catarina (UFSC), Sônia Felipe, advocates ending the use of animals in scientific research. The professor argues that methods using animals can be cruel and cause extreme suffering. “The most painful experiments, those involving infections, inflammation, neurological procedures, and injuries with acids, fire, and all kinds of internal or external damage, allow no analgesia or anesthesia, because it masks the results,” she explains.

The researcher also contends that there are alternatives for scientific research without animals, but that the pharmaceutical industry has no interest in deepening its knowledge of alternative protocols. “These approaches have been sidelined by science, because many of them would not send anyone to the pharmacy hoping for relief or a cure. If humans are sick, most of them are sick because they follow a diet that is aggressive to their health,” Sônia Felipe believes.

(Heloisa Cristaldo/ Agência Brasil)

http://agenciabrasil.ebc.com.br/noticia/2013-10-23/instituto-royal-nega-que-usava-animais-em-testes-de-cosmeticos-ou-de-produtos-de-limpeza

O Globo

Instituto Royal releases video denying mistreatment and cosmetic testing on beagles

http://oglobo.globo.com/pais/instituto-royal-divulga-video-negando-maus-tratos-uso-de-cosmeticos-em-beagles-10517592#ixzz2ieEqfs7G

*   *   *

Minister says activists' raid on the Instituto Royal was “a crime” (Agência Brasil)

According to the minister, when the legislation was debated, the scientific community's need to run tests on new medicines was debated as well

The Minister of Science, Technology and Innovation, Marco Antonio Raupp, condemned yesterday (23), in the Chamber of Deputies, the raid on the Instituto Royal in São Paulo by animal-rights activists. For the minister, the episode, which took place last Friday (18), was a “crime.” In the incident, the militants removed 178 beagle dogs that were used in scientific research.

“This raid is a crime. It was carried out in defiance of the law. When the legislation was debated, the need of the scientific community – public agencies, universities, and companies alike – to run tests on new medicines was debated as well. That is how it is everywhere in the world, not just in Brazil.”

Raupp went to the Chamber of Deputies to take part in a joint public hearing of the House's thematic committees on the bill for the National Science and Technology Code (PL 2,177/2011), on which the rapporteur, deputy Sibá Machado (PT-AC), presented his report today. According to the minister, given its importance, the bill amounts to a “mini constitutional convention for science and technology” that will give the sector a major boost in the country.

It was decided that the Chamber's Science and Technology Committee will ask the college of party leaders next week to put the bill to a floor vote. The committee vote was also left for next week, but first the rapporteur will meet with representatives of the ministries that took part in the hearing – Education; Science, Technology and Innovation; Development, Industry and Foreign Trade; and Defense – to discuss changes to the substitute text he presented, incorporating points those sectors consider important.

(Jorge Wamburg/Agência Brasil)

http://agenciabrasil.ebc.com.br/noticia/2013-10-23/ministro-diz-que-invasao-de-ativistas-ao-instituto-royal-foi-%E2%80%9Cum-crime%E2%80%9D

*   *   *

Anvisa analisa legislação que trata do uso de animais para fins científicos (Agência Brasil)

As regras para o uso de animais em pesquisa são definidas pela Lei Arouca e pelos comitês de ética em pesquisa com animais ligados ao Sistema de Comitês de Ética em Pesquisa

A legislação que trata do uso de animais para fins científicos e didáticos está sob análise da Agência Nacional de Vigilância Sanitária (Anvisa). A autarquia avalia se há lacunas referentes à fiscalização das pesquisas para produção de medicamentos e cosméticos que podem ter impacto no uso de cobaias.

De acordo com a Anvisa, a legislação atual não especifica o órgão responsável pela fiscalização dos laboratórios de pesquisa em animais. No âmbito da agência reguladora, não há exigência expressa para o uso de animais em testes, mas é necessária a apresentação de dados que comprovem a segurança dos diversos produtos registrados na Anvisa. Métodos alternativos são aceitos desde que sejam capazes de comprovar a segurança do produto.

Na semana passada, a autarquia informou, por meio de nota, ter firmado há dois anos cooperação com o Centro Brasileiro de Validação de Métodos Alternativos (Bracvam), ligado à Fundação Oswaldo Cruz (Fiocruz), para que sejam validados métodos alternativos que dispensem o uso de animais.

As regras para o uso de animais em pesquisa são definidas pela Lei 11.794, batizada de Lei Arouca, e pelos comitês de ética em pesquisa com animais ligados ao Sistema de Comitês de Ética em Pesquisa. Por definição da Lei Arouca, as instituições que executam atividades com animais podem receber cinco tipos de punição, que vão da advertência e suspensão de financiamentos oficiais à interdição definitiva do laboratório. A multa pode variar entre R$ 5 mil e R$ 20 mil.

Responsible for regulating scientific activities involving animals, the National Council for the Control of Animal Experimentation (Concea), linked to the Ministry of Science, Technology and Innovation, establishes by directive that scientific or teaching activities must consider replacing the use of animals, reducing the number of animals used, and refining techniques so as to lessen the negative impact on animal welfare.

The directive also instructs professionals to choose humane methods for conducting projects, to assess the animals regularly for signs of pain or acute stress over the course of the project, and to use tranquilizers, analgesics and anesthetics appropriate to the animal species and to the scientific or teaching objectives.

(Heloisa Cristaldo/ Agência Brasil)

http://agenciabrasil.ebc.com.br/noticia/2013-10-23/anvisa-analisa-legislacao-que-trata-do-uso-de-animais-para-fins-cientificos

*   *   *

ABC and SBPC speak out against the invasion of the Instituto Royal

October 23, 2013

Statement jointly signed by the presidents of the two organizations, Jacob Palis and Helena Nader

The Brazilian Academy of Sciences (ABC) and the Brazilian Society for the Advancement of Science (SBPC), together with the other organizations representing the scientific community, repudiate the violent acts committed against the Instituto Royal, in São Roque, São Paulo, which conducts risk-assessment and safety studies of new medicines.

It is important to make clear to Brazilian society the importance of the research carried out at the Instituto Royal for Brazil's development. The Institute was accredited by the National Council for the Control of Animal Experimentation (CONCEA), and each of its projects was reviewed and approved by an Ethics Committee on Animal Use (CEUA), complying in every respect with Law 11,794, the Arouca Law, passed by the National Congress in 2008. This law regulates the responsible breeding and use of animals in teaching and scientific research activities throughout the national territory, preventing animal life from being sacrificed in vain.

Brazilian citizens should know that CONCEA includes among its members representatives of the legally established animal protection societies in the country, and that in the history of world medicine, fundamental discoveries have been made, millions of deaths prevented and life expectancies increased thanks to the use of animals in research for human and animal health.

The Instituto Royal is directed by Prof. João Antonio Pegas Henriques, a Full Member of the Brazilian Academy of Sciences, an active member of the SBPC, a CNPq level 1-A researcher and a graduate program advisor, always rigorous and competent. This Institute is of great importance if Brazil is to build genuine capacity to produce medicines and inputs for human and animal health.

It is essential that the authorities, and above all society at large, prevent misguided acts that destroy years of important scientific work, and safeguard the research activities carried out at Brazilian universities and research institutions.

October 22, 2013

Jacob Palis

President of the Brazilian Academy of Sciences

Helena Bociani Nader

President of the Brazilian Society for the Advancement of Science

*   *   *

Animal behavior (Folha de S.Paulo)

October 23, 2013

Folha de S.Paulo editorial on scientific experiments on animals

The use of animals in scientific experiments is a topic of public debate that can easily become trapped in sterile polarization.

At one extreme cluster the sentimental radicals who consider it defensible to violate laws and property to "save" animals from alleged mistreatment. At the other, the short-sighted pragmatists who treat the advancement of research as a higher value justifying any form of animal suffering.

This escalation has played out in several countries and, as in Brazil, the debate has gone astray – witness the invasion of an animal facility in São Roque (SP) and the legion of supporters it attracted.

Brazil has not yet reached the extremes seen in the United Kingdom in 2004, when the Animal Liberation Front used threats and attacks to block the construction of animal testing centers in Oxford and Cambridge.

For a long time now, however, the discussion has moved beyond irrational extremism. Researchers have a strong interest in reducing the use of animals, because it is expensive and exposes their studies to ethical challenges.

In some cases, however, this resource remains unavoidable, as in carcinogenicity testing (the capacity to cause tumors). Banning all laboratory animals would mean blocking safety tests on new products, many of them created to relieve human suffering.

It is thus inescapable to accept a hierarchy of value among species: a human life is worth more than a dog's, which is worth more than a rat's. The very invaders of the institute in São Roque, incidentally, rescued 178 dogs and left the rodents behind.

This does not mean authorizing scientists to torment, mutilate or sacrifice as many animals as they wish. The civilizing trend has been to subject animal use to what became known in English as the three Rs: replacement, reduction and refinement.

First, it is a matter of finding substitutes. Much progress has been made with in vitro systems, such as culturing living tissue to test potentially toxic substances. Second, when animals are indispensable, the number of specimens should be kept to a minimum. The third imperative is to refine methods to prevent unnecessary suffering.

These are the principles underlying several national laws on the matter, such as Brazil's Law 11,794/2008. In a living democracy such as ours, there are institutional avenues both for enforcing the law and for changing it, and reckless invasions are not among the admissible ones.

(Folha de S. Paulo)

http://www1.folha.uol.com.br/fsp/opiniao/135200-comportamento-animal.shtml

A companion piece published in Folha:

The feelings of animals

http://www1.folha.uol.com.br/fsp/cotidiano/135252-o-sentimento-dos-animais.shtml

*   *   *

FeSBe divulga manifesto em repúdio à invasão do Instituto Royal

October 23, 2013

The federation representing scientific societies in experimental biology argues that depredation, vandalism and theft must be punished rigorously

The Federation of Societies of Experimental Biology (FeSBE) has released a manifesto expressing its repudiation of the invasion of the Instituto Royal in São Roque, SP. According to the text, society wants a better quality of life, longer life expectancy and animal health that advances at the same pace. "Scientific research has been meeting this demand, but obscurantism must be eradicated from our midst so that society can benefit from recent scientific advances and from those still to come," the manifesto says.

Read the document in full:

“Manifesto on animal experimentation

The Federation of Societies of Experimental Biology (FeSBE) publicly expresses its repudiation of the invasion, depredation and aggravated theft of experimental animals from the Instituto Royal, in São Roque. In the second decade of the 21st century, acts such as this, explicable only by the obscurantism that still grips minority groups in our society, can no longer be tolerated at any level. The Institute follows the technical and ethical standards of the National Council for the Control of Animal Experimentation (CONCEA), as well as the requirements of other national and international bodies, conducting research of great relevance to the development of medicines and other products that are fundamental to both human and animal health. Destroying such an asset, or preventing the institution from continuing this research, amounts to disrespect for the animals themselves. Law 11,794, the Arouca Law, governs animal research in Brazil and must be respected like all the other laws that govern our daily conduct as citizens. Any violations of the Arouca Law must be punished with the full rigor of the law; depredation, vandalism, theft and the obstruction of other people's rights must be punished with the same rigor, within the rule of law under which we live. Any other stance means a departure from the rule of law, with the obvious consequences that may follow.

FeSBE, as the representative of scientific societies in experimental biology, supports and will always support scientific research conducted according to scientific and ethical principles, which are public knowledge, including those governing animal experimentation. Society at large wants a better quality of life, wants life expectancy to increase and wants animal health to advance at the same pace. Scientific research has been meeting this demand, but obscurantism must be eradicated from our midst so that society can benefit from recent scientific advances and from those still to be produced in the years ahead.

FeSBE Board of Directors”

*   *   *

Chronic diseases on the rise among Xingu indigenous people (Fapesp)

Once rare or nonexistent, these diseases now show worrying rates among the Khisêdjê people, according to a study conducted at Unifesp (photo: Gimeno and colleagues)

October 14, 2013

By Noêmia Lopes

Agência FAPESP – Malaria, respiratory infections and diarrhea were the main causes of death in the Xingu Indigenous Park (PIX), in Mato Grosso, in 1965 – the year the Escola Paulista de Medicina (EPM), now part of the Federal University of São Paulo (Unifesp), took over responsibility for the health of the indigenous peoples living there.

Today malaria is under control and, although infectious and parasitic diseases still matter in terms of mortality, it is chronic non-communicable conditions – such as hypertension, glucose intolerance and dyslipidemia (an abnormal increase in blood lipid levels) – that are on the rise.

Against this backdrop, EPM/Unifesp researchers examined and interviewed 179 Khisêdjê Indians, residents of the central area of the Xingu park in Mato Grosso, between 2010 and 2011.

Analysis of the results showed a prevalence of arterial hypertension of 10.3% across both sexes, while 18.7% of the women and 53% of the men had blood pressure levels considered worrying.

“Taking blood pressure values of 140/90 mmHg or higher as indicative of hypertension, surveys have found prevalences between 22.3% and 43.9% in the general Brazilian population,” said Suely Godoy Agostinho Gimeno, coordinator of the Khisêdjê study and a researcher at the Department of Preventive Medicine of EPM/Unifesp and at the Health Institute of the São Paulo State Health Department.

The Khisêdjê study was carried out with support from FAPESP and from the Xingu Project, an initiative of the Health and Environment Unit of the Department of Preventive Medicine at EPM/Unifesp.

The Khisêdjê are not yet as hypertensive as other Brazilians, but the situation is delicate, since the condition was nonexistent or rare in Brazilian villages until a few decades ago.

Glucose intolerance, in turn, was identified in 30.5% of the women (6.9% of the total with diabetes mellitus) and in 17% of the men (2% of the total with diabetes mellitus). Dyslipidemia appeared in 84.4% of participants of both sexes.

“We had examined the Khisêdjê before, between 1999 and 2000. Comparing the data from that period with the most recent figures, we saw a significant increase in all these chronic non-communicable diseases. Other studies show the same increase among the other indigenous peoples of the Xingu and of other areas of the country,” Gimeno said.

According to the researcher, the factors transforming the picture among the Indians include greater proximity to urban centers and more intense contact with non-indigenous society, with the adoption of new habits and customs; an increase in the number of individuals in paid work, abandoning traditional subsistence practices such as farming, hunting and fishing; and greater access to consumer goods such as industrialized foods, electronics and boat motors (which eliminate the need to paddle).

The results were reported to the Khisêdjê, individually and in groups, and the Unifesp health team is following the cases that require medical care.

Even so, the situation worries the researchers, since controlling these diseases requires conditions not always available in the villages, such as refrigeration (in the case of insulin), control of medication doses and schedules, and regular monitoring of blood glucose and blood pressure. According to Gimeno, "encouraging and guaranteeing the preservation of these peoples' habits and customs would be preventive measures of great value."

Excess weight

Data collection to establish the nutritional and metabolic profile of the Khisêdjê took place at different times in 2010 and 2011, when the researchers spent 15 to 20 days at a time in the people's main village, called Ngojwere.

The information collected included arm, waist and hip circumferences; weight; height; body composition (water, lean mass and fat mass); blood pressure; biochemical profile (through tests such as blood glucose); physical fitness; socioeconomic status; food consumption; and agricultural practices.

Another result of the analysis was the prevalence of excess weight (overweight or obesity): 36% among the women and 56.8% among the men.

“We observed, however, that particularly among the men this prevalence is due to a greater amount of muscle mass rather than fatty tissue. This finding suggests that, for this population, the standard criteria for identifying excess weight are not appropriate, since the individuals are muscular, not obese,” Gimeno said.

According to the researcher, this conclusion is corroborated by the physical fitness tests. “Most of the values reveal muscular strength in the lower limbs, muscular endurance in the upper limbs and abdomen, flexibility and cardiorespiratory capacity. Compared with non-indigenous people, the Khisêdjê have an active or very active profile, contradicting the idea that possible sedentarism might be associated with the diseases investigated,” she said.

One hypothesis (not empirically confirmed) that could explain the apparent contradiction is that in the past these Indians were far more active still than they are today, and that this possible reduction in habitual physical activity is what relates to the chronic conditions.

Team and impact

Three physicians, four nurses, five nutritionists, two physical educators, one sociologist and four undergraduates (in medicine and nursing) took part in the research through Unifesp.

The team also includes a sixth nutritionist from the Health Institute of the São Paulo State Health Department, as well as indigenous health agents and teachers who live in the Ngojwere village and acted as interpreters.

The project has led to six presentations at international conferences and two at national congresses, three master's theses and an article in the journal Cadernos de Saúde Pública. The full text is available at http://www.scielosp.org/pdf/csp/v28n12/11.pdf.

Gene Variants in Immune System Pathways Correlated With Composition of Microbes of Human Body (Science Daily)

Oct. 24, 2013 — Human genes in immunity-related pathways are likely associated with the composition of an individual’s microbiome, which refers to the bacteria and other microbes that live in and on the body, scientists reported today, Oct. 24, at the American Society of Human Genetics 2013 annual meeting in Boston.

Bacterial colonies on an agar plate. This study is the first genome-wide and microbiome-wide investigation to identify the interactions between human genetic variation and the composition of the microbes that inhabit the human body. (Credit: © anyaivanova / Fotolia)

“These genes are significantly enriched in inflammatory and immune pathways and form an interaction network highly enriched with immunity-related functions,” said Ran Blekhman, Ph.D., Assistant Professor, Department of Genetics, Cell Biology, and Development at the University of Minnesota, Minneapolis.

The study is the first genome-wide and microbiome-wide investigation to identify the interactions between human genetic variation and the composition of the microbes that inhabit the human body.

The skin, genital areas, mouth, and other areas of the human body, especially the intestines, are colonized by trillions of bacteria and other microorganisms. “Shifts in the composition of the species of the microbes have been associated with multiple chronic conditions, such as diabetes, inflammatory bowel disease and obesity,” noted Dr. Blekhman.

Dr. Blekhman and his collaborators found evidence of genetic influences on microbiome composition at 15 body sites of 93 people surveyed. “We found in our study that genetic variation correlated with the microbiome at two levels,” he said.

At the individual level, the mathematical procedure known as principal component analysis demonstrated that genetic variation correlated with the overall structure of a person’s microbiome.

At the species level, potential correlations between host genetic variation and the abundance of a single bacterial species were identified, said Dr. Blekhman, who conducted much of the research while a scientist in the lab of Andrew G. Clark, Ph.D., the Jacob Gould Schurman Professor of Population Genetics in the Department of Molecular Biology and Genetics at Cornell University, Ithaca, NY. Dr. Clark is the senior author of the abstract.

To identify the bacterial species that inhabited each human body site, the researchers mined sequence data from the Human Microbiome Project (HMP), an international program to genetically catalog the microbial residents of the human body.

Using a systems-level association approach, the researchers showed that variation in genes related to immune system pathways was correlated with microbiome composition in the 15 host body sites.

To shed light on the evolutionary history of the symbiosis between humans and their microbiomes, the researchers analyzed sequencing data from the 1000 Genomes Project, which is designed to provide a comprehensive resource on human genetic variation.

They found that the genes in the pathways linked to the composition of an individual’s microbiome vary significantly across populations. “Moreover, many of those genes have been shown in recent studies to be under selective pressure,” said Dr. Blekhman.

“The results highlight the role of host immunity in determining bacteria levels across the body and support a possible role for the microbiome in driving the evolution of bacteria-associated host genes,” he added.

Dr. Blekhman is currently investigating the combined role of host genetics and the microbiome in influencing an individual’s susceptibility to such diseases as colon cancer. His goal is to unravel the interaction between host genomic variation and the gut microbiome in colon cancer incidence, evolution and therapeutic response.

Aboriginal Hunting Practice Increases Animal Populations (Science Daily)

Oct. 24, 2013 — In Australia’s Western Desert, Aboriginal hunters use a unique method that actually increases populations of the animals they hunt, according to a study co-authored by Stanford Woods Institute-affiliated researchers Rebecca and Doug Bird. Rebecca Bird is an associate professor of anthropology, and Doug Bird is a senior research scientist.

Aboriginal hunters looking for monitor lizards as fires burn nearby. (Credit: Rebecca Bird)

The study, published today in Proceedings of the Royal Society B, offers new insights into maintaining animal communities through ecosystem engineering and co-evolution of animals and humans. It finds that populations of monitor lizards nearly double in areas where they are heavily hunted. The hunting method — using fire to clear patches of land to improve the search for game — also creates a mosaic of regrowth that enhances habitat. Where there are no hunters, lightning fires spread over vast distances, landscapes are more homogenous and monitor lizards are more rare.

“Our results show that humans can have positive impacts on other species without the need for policies of conservation and resource management,” Rebecca Bird said. “In the case of indigenous communities, the everyday practice of subsistence might be just as effective at maintaining biodiversity as the activities of other organisms.”

Martu, the aboriginal community the Birds and their colleagues have worked with for many years, refer to their relationship with the ecosystem around them as part of “jukurr” or dreaming. This ritual, practical philosophy and body of knowledge instructs the way Martu interact with the desert environment, from hunting practices to cosmological and social organization. At its core is the concept that land must be used if life is to continue. Therefore, Martu believe the absence of hunting, not its presence, causes species to decline.

While jukurr has often been interpreted as belonging to the realm of the sacred and irrational, it appears to actually be consistent with scientific understanding, according to the study. The findings suggest that the decline in aboriginal hunting and burning in the mid-20th century, due to the persecution of aboriginal people and the loss of traditional economies, may have contributed to the extinction of many desert species that had come to depend on such practices.

The findings add to a growing appreciation of the complex role that humans play in the function of ecosystems worldwide. In environments where people have been embedded in ecosystems for millennia, including areas of the U.S., tribal burning was extensive in many types of habitat. Many Native Americans in California, for instance, believe that policies of fire suppression and the exclusion of their traditional burning practices have contributed to the current crisis in biodiversity and native species decline, particularly in the health of oak woodland communities. Incorporating indigenous knowledge and practices into contemporary land management could become important in efforts to conserve and restore healthy ecosystems and landscapes.

The study was funded by the National Science Foundation.

Journal Reference:

  1. R. B. Bird, N. Taylor, B. F. Codding, D. W. Bird. Niche construction and Dreaming logic: aboriginal patch mosaic burning and varanid lizards (Varanus gouldii) in Australia. Proceedings of the Royal Society B: Biological Sciences, 2013; 280 (1772): 20132297. DOI: 10.1098/rspb.2013.2297

GENERAL OVERVIEW OF THE EFFECTS OF NUCLEAR TESTING (CTBTO)

The material contained in this chapter is based on official government sources as well as information provided by research institutions, policy organizations, peer-reviewed journals and eyewitness accounts.

http://www.ctbto.org/nuclear-testing/the-effects-of-nuclear-testing/

The CTBTO remains neutral in any ongoing disputes related to compensation for veterans of the nuclear test programmes.  

Nuclear weapons have been tested in all environments since 1945: in the atmosphere, underground and underwater. Tests have been carried out onboard barges, on top of towers, suspended from balloons, on the Earth’s surface, more than 600 metres underwater and over 200 metres underground. Nuclear test bombs have also been dropped by aircraft and fired by rockets up to 320 km into the atmosphere.

The Natural Resources Defense Council estimated the total yield of all nuclear tests conducted between 1945 and 1980 at 510 megatons (Mt). Atmospheric tests alone accounted for 428 Mt, equivalent to over 29,000 Hiroshima-size bombs.

Frigate Bird nuclear test explosion seen through the periscope of the submarine USS Carbonero (SS-337), Johnston Atoll, Central Pacific Ocean, 1962.

The first nuclear test was carried out by the United States in July 1945, followed by the Soviet Union in 1949, the United Kingdom in 1952, France in 1960, and China in 1964.

The amount of radioactivity generated by a nuclear explosion can vary considerably depending upon a number of factors. These include the size of the weapon and the location of the burst. An explosion at ground level may be expected to generate more dust and other radioactive particulate matters than an air burst. The dispersion of radioactive material is also dependent upon weather conditions.

Large amounts of radionuclides dispersed into the atmosphere

Levels of radiocarbon (C14) in the atmosphere 1945 – 2000. Image credit: Hokanomono.

The 2000 Report of the United Nations Scientific Committee on the Effects of Atomic Radiation to the General Assembly states that:
“The main man-made contribution to the exposure of the world’s population [to radiation] has come from the testing of nuclear weapons in the atmosphere, from 1945 to 1980. Each nuclear test resulted in unrestrained release into the environment of substantial quantities of radioactive materials, which were widely dispersed in the atmosphere and deposited everywhere on the Earth’s surface.”

Different types of nuclear tests: (1) atmospheric test; (2) underground test; (3) upper atmospheric test; and (4) underwater test.

Concern over bone-seeking radionuclides and the first mitigating steps

Prior to 1950, only limited consideration was given to the health impacts of worldwide dispersion of radioactivity from nuclear testing. Public protests in the 1950s and concerns about the radionuclide strontium-90 (see Chart 1) and its effect on mother’s milk and babies’ teeth were instrumental in the conclusion of the Partial Test Ban Treaty (PTBT) in 1963. The PTBT banned nuclear testing in the atmosphere, outer space and under water, but not underground, and was signed by the United States, the Soviet Union and the United Kingdom. However, France and China did not sign and conducted atmospheric tests until 1974 and 1980 respectively.

Although underground testing mitigated the problem of radiation doses from short-lived radionuclides such as iodine-131, large amounts of plutonium, iodine-129 and caesium-135 (See Chart 1) were released underground. In addition, exposure occurred beyond the test site if radioactive gases leaked or were vented.

Scientist arranging mice for radiation exposure investigations around 1944. (It was during these experiments that the carcinogenicity of urethane was discovered.)

Gradual increase in knowledge about dangers of radiation exposure

Over the past century, there has been a gradual accumulation of knowledge about the hazards of radioactivity. It was recognized early on that exposure to a sufficient radiation dosage could cause injuries to internal organs, as well as to the skin and the eyes.

According to the 2000 Report of the United Nations Scientific Committee on the Effects of Atomic Radiation to the UN General Assembly, radiation exposure can damage living cells, killing some and modifying others. The destruction of a sufficient number of cells will inflict noticeable harm on organs which may result in death. If altered cells are not repaired, the resulting modification will be passed on to further cells and may eventually lead to cancer. Modified cells that transmit hereditary information to the offspring of the exposed individual might cause hereditary disorders. Vegetation can also be contaminated when fallout is directly deposited on external surfaces of plants and absorbed through the roots. Furthermore, people can be exposed when they eat meat and milk from animals grazing on contaminated vegetation.

Radiation exposure has been associated with most forms of leukaemia, as well as cancer of the thyroid, lung and breast.

A girl who lost her hair after being exposed to radiation from the bomb dropped on Hiroshima on 6 August 1945.

Studies reveal link between nuclear weapon testing and cancer

The American Cancer Society’s website explains how ionizing radiation, which refers to several types of particles and rays given off by radioactive materials, is one of the few scientifically proven carcinogens in human beings. Radiation exposure has been associated with most forms of leukaemia, as well as cancer of the thyroid, lung and breast. The time that may elapse between radiation exposure and cancer development can be anything between 10 and 40 years. Degrees of exposure regarded as tolerable in the 1950s are now recognized internationally as unsafe.

An article featured in Volume 94 of American Scientist, “Fallout from Nuclear Weapons Tests and Cancer Risks,” states that a number of studies of biological samples (including bone, thyroid glands and other tissues) have provided increasing proof that specific radionuclides in fallout are implicated in fallout-related cancers.

It is difficult to assess the number of deaths that might be attributed to radiation exposure from nuclear testing. Some studies and evaluations, including an assessment by Arjun Makhijani on the health effects of nuclear weapon complexes, estimate that cancer fatalities due to the global radiation doses from the atmospheric nuclear testing programmes of the five nuclear-weapon States amount to hundreds of thousands. A 1991 study by the International Physicians for the Prevention of Nuclear War (IPPNW) estimated that the radiation and radioactive materials from atmospheric testing taken in by people up until the year 2000 would cause 430,000 cancer deaths, some of which had already occurred by the time the results were published. The study predicted that roughly 2.4 million people could eventually die from cancer as a result of atmospheric testing.

CHART 1 – EFFECTS OF RADIONUCLIDES

Radionuclide | Half-life* | Health hazards
Xenon (Xe) | 6.7 hours | Inhalation in excessive concentrations can result in dizziness, nausea, vomiting, loss of consciousness, and death. At low oxygen concentrations, unconsciousness and death may occur in seconds without warning.
Americium-241 (241Am) | 432 years | Moves rapidly through the body after ingestion and is concentrated within the bones for a long period of time. During this storage americium slowly decays, releasing radioactive particles and rays that can alter genetic material and cause bone cancer.
Iodine-131 (131I) | 8 days | When present at high levels in the environment from radioactive fallout, I-131 can be absorbed through contaminated food. It accumulates in the thyroid gland, where its decay can damage or destroy all or part of the thyroid; thyroid cancer may occur.
Caesium-137 (137Cs) | 30 years | After entering the body, caesium is distributed fairly uniformly, with higher concentrations in muscle tissue and lower concentrations in bone. Can cause gonadal irradiation and genetic damage.
Krypton-85 (85Kr) | 10.76 years | Inhalation in excessive concentrations can result in dizziness, nausea, vomiting, loss of consciousness, and death.
Strontium-90 (90Sr) | 28 years | When ingested, a small amount is deposited in bones, bone marrow, blood, and soft tissues. Can cause bone cancer, cancer of nearby tissues, and leukaemia.
Plutonium-239 (239Pu) | 24,100 years | Released when a plutonium weapon is exploded. Ingestion of even a minuscule quantity is a serious health hazard and can cause lung, bone, and liver cancer. The highest doses are to the lungs, bone marrow, bone surfaces, and liver.
Tritium (3H) | 12 years | Easily ingested; can also be inhaled as a gas or absorbed through the skin. Enters soft tissues and organs. Exposure to tritium increases the risk of developing cancer; the beta radiation it emits can cause lung cancer.

* The amount of time it takes for half of a given quantity of radioactive material to decay.
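
The half-life definition can be put to work directly: the fraction of a radionuclide remaining after time t is (1/2)^(t/T), where T is the half-life. A minimal sketch:

```python
# Fraction of a radionuclide remaining after a given time, from its half-life.
def remaining_fraction(elapsed_years, half_life_years):
    return 0.5 ** (elapsed_years / half_life_years)

# Caesium-137 (half-life 30 years): half remains after 30 years,
# a quarter after 60.
print(remaining_fraction(30.0, 30.0))  # 0.5
print(remaining_fraction(60.0, 30.0))  # 0.25
```

The same formula shows why the chart's entries matter on such different timescales: iodine-131 is mostly gone within weeks, while plutonium-239 persists for tens of millennia.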

Marie Curie won the Nobel Prize in chemistry in 1911 for her discovery of the elements radium and polonium. The curie unit is named after Marie and Pierre Curie, who conducted pioneering research on radiation.

Measuring radiation doses and biological risks

Scientists use different terms when measuring radiation. The terms can refer to the radiation emitted by a radioactive source, to the radiation dose absorbed by a person, or to the risk that a person will suffer health effects from exposure to radiation. When a person is exposed to radiation, energy is deposited in the body’s tissues. The amount of energy deposited per unit weight of human tissue is called the absorbed dose. This is measured using the rad or the SI unit, the gray (Gy). The rad, which stands for radiation absorbed dose, has largely been replaced by the gray; one gray is equal to 100 rad.

The curie (symbol Ci) is a unit of radioactivity. It has largely been replaced by the SI unit, the becquerel (Bq), defined as one atomic decay per second.

A person’s biological risk (i.e. the risk that a person will suffer health effects from an exposure to radiation) is measured using the conventional unit rem or the SI unit, the sievert (Sv); one sievert is equal to 100 rem.
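
Both conversions between the conventional and SI units are simple factors of 100, which a short sketch makes concrete:

```python
# Conversions between conventional and SI radiation units:
# 1 gray (Gy) = 100 rad, and 1 sievert (Sv) = 100 rem.
def rad_to_gray(rad):
    return rad / 100.0

def rem_to_sievert(rem):
    return rem / 100.0

# A 400 rem dose (see Chart 2) is 4 Sv; a 100 rad absorbed dose is 1 Gy.
print(rem_to_sievert(400))  # 4.0
print(rad_to_gray(100))     # 1.0
```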

CHART 2. EFFECTS OF DIFFERENT LEVELS OF RADIATION

Radiation dose in rems | Health impact
5-20 | Possible chromosomal damage.
20-100 | Temporary reduction in the number of white blood cells. Mild nausea and vomiting. Loss of appetite. Fatigue, which may last up to four weeks. Greater susceptibility to infection. A greater long-term risk of leukaemia and lymphoma is possible.
100-200 | Mild radiation sickness within a few hours: vomiting, diarrhea, fatigue; reduced resistance to infection. Hair loss. Temporary male sterility.
200-300 | Serious radiation sickness, as at 100-200 rem. Rapidly dividing body cells, including blood cells, gastrointestinal-tract cells, reproductive cells, and hair cells, can also be destroyed. The DNA of surviving cells is damaged.
300-400 | Serious radiation sickness. Destruction of bone marrow and intestine. Haemorrhaging of the mouth.
400-1000 | Acute illness and possible heart failure. Bone marrow almost completely destroyed. Permanent female sterility probable.
1000-5000 | Acute illness; nerve cells and small blood vessels are destroyed. Death can occur within days.

 

How Quantum Computers and Machine Learning Will Revolutionize Big Data (Wired)

BY JENNIFER OUELLETTE, QUANTA MAGAZINE

10.14.13

Image: infocux Technologies/Flickr

When subatomic particles smash together at the Large Hadron Collider in Switzerland, they create showers of new particles whose signatures are recorded by four detectors. The LHC captures 5 trillion bits of data — more information than all of the world’s libraries combined — every second. After the judicious application of filtering algorithms, more than 99 percent of those data are discarded, but the four experiments still produce a whopping 25 petabytes (25×10¹⁵ bytes) of data per year that must be stored and analyzed. That is a scale far beyond the computing resources of any single facility, so the LHC scientists rely on a vast computing grid of 160 data centers around the world, a distributed network that is capable of transferring as much as 10 gigabytes per second at peak performance.

The LHC’s approach to its big data problem reflects just how dramatically the nature of computing has changed over the last decade. Since Intel co-founder Gordon E. Moore first defined it in 1965, the so-called Moore’s law — which predicts that the number of transistors on integrated circuits will double every two years — has dominated the computer industry. While that growth rate has proved remarkably resilient, for now, at least, “Moore’s law has basically crapped out; the transistors have gotten as small as people know how to make them economically with existing technologies,” said Scott Aaronson, a theoretical computer scientist at the Massachusetts Institute of Technology.

Instead, since 2005, many of the gains in computing power have come from adding more parallelism via multiple cores, with multiple levels of memory. The preferred architecture no longer features a single central processing unit (CPU) augmented with random access memory (RAM) and a hard drive for long-term storage. Even the big, centralized parallel supercomputers that dominated the 1980s and 1990s are giving way to distributed data centers and cloud computing, often networked across many organizations and vast geographical distances.

These days, “People talk about a computing fabric,” said Stanford University electrical engineer Stephen Boyd. These changes in computer architecture translate into the need for a different computational approach when it comes to handling big data, which is not only grander in scope than the large data sets of yore but also intrinsically different from them.

The demand for ever-faster processors, while important, isn’t the primary focus anymore. “Processing speed has been completely irrelevant for five years,” Boyd said. “The challenge is not how to solve problems with a single, ultra-fast processor, but how to solve them with 100,000 slower processors.” Aaronson points out that many problems in big data can’t be adequately addressed by simply adding more parallel processing. These problems are “more sequential, where each step depends on the outcome of the preceding step,” he said. “Sometimes, you can split up the work among a bunch of processors, but other times, that’s harder to do.” And often the software isn’t written to take full advantage of the extra processors. “If you hire 20 people to do something, will it happen 20 times faster?” Aaronson said. “Usually not.”
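
Aaronson's 20-workers point is captured by Amdahl's law: if a fraction s of a job is inherently sequential, then n workers give a speedup of 1 / (s + (1 − s)/n), which is always less than n whenever s is above zero. A sketch:

```python
# Amdahl's law: with a fraction s of the work inherently sequential,
# n parallel workers give a speedup of 1 / (s + (1 - s) / n), never the full n.
def amdahl_speedup(serial_fraction, workers):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / workers)

# 20 workers on a job that is 10% sequential: far less than 20x.
print(round(amdahl_speedup(0.10, 20), 1))  # 6.9
```

Even a modest sequential fraction caps the benefit of adding processors, which is exactly why "usually not" is the right answer to Aaronson's question.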

Researchers also face challenges in integrating very differently structured data sets, as well as the difficulty of moving large amounts of data efficiently through a highly distributed network.

Those issues will become more pronounced as the size and complexity of data sets continue to grow faster than computing resources, according to California Institute of Technology physicist Harvey Newman, whose team developed the LHC’s grid of data centers and trans-Atlantic network. He estimates that if current trends hold, the computational needs of big data analysis will place considerable strain on the computing fabric. “It requires us to think about a different kind of system,” he said.

Memory and Movement

Emmanuel Candes, an applied mathematician at Stanford University, was once able to crunch big data problems on his desktop computer. But last year, when he joined a collaboration of radiologists developing dynamic magnetic resonance imaging — whereby one could record a patient’s heartbeat in real time using advanced algorithms to create high-resolution videos from limited MRI measurements — he found that the data no longer fit into his computer’s memory, making it difficult to perform the necessary analysis.

Addressing the storage-capacity challenges of big data is not simply a matter of building more memory, which has never been more plentiful. It is also about managing the movement of data. That’s because, increasingly, the desired data is no longer at people’s fingertips, stored in a single computer; it is distributed across multiple computers in a large data center or even in the “cloud.”

There is a hierarchy to data storage, ranging from the slowest, cheapest and most abundant memory to the fastest and most expensive, with the least available space. At the bottom of this hierarchy is so-called “slow memory” such as hard drives and flash drives, the cost of which continues to drop. Hard drives offer more space than the other kinds of memory, but saving and retrieving data takes longer. Next up the ladder comes RAM, which is much faster than slow memory but offers less space and is more expensive. Then there is cache memory, another trade-off of space and price in exchange for faster retrieval speeds, and finally the registers on the microchip itself, which are the fastest of all but the priciest to build, with the least available space. If memory storage were like real estate, a hard drive would be a sprawling upstate farm, RAM would be a medium-sized house in the suburbs, cache memory would be a townhouse on the outskirts of a big city, and register memory would be a tiny studio in a prime urban location.
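
The hierarchy can be sketched with order-of-magnitude latencies; the figures below are rough illustrative assumptions, not measurements of any particular machine:

```python
# Illustrative, order-of-magnitude access latencies in seconds for the
# storage hierarchy described above (rough assumptions, not measured values).
hierarchy = [
    ("register",   3e-10),   # fastest, least space
    ("cache",      1e-8),
    ("RAM",        1e-7),
    ("hard drive", 1e-2),    # slowest, most space
]

# A "commute" from disk is tens of millions of times longer than from a
# register, which is why a fast processor ends up treading water.
ratio = hierarchy[-1][1] / hierarchy[0][1]
print(f"{ratio:.0e}")  # 3e+07
```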

Longer commutes for stored data translate into processing delays. “When computers are slow today, it’s not because of the microprocessor,” Aaronson said. “The microprocessor is just treading water waiting for the disk to come back with the data.” Big data researchers prefer to minimize how much data must be moved back and forth from slow memory to fast memory. The problem is exacerbated when the data is distributed across a network or in the cloud, because it takes even longer to move the data back and forth, depending on bandwidth capacity, so that it can be analyzed.

One possible solution to this dilemma is to embrace the new paradigm. In addition to distributed storage, why not analyze the data in a distributed way as well, with each unit (or node) in a network of computers performing a small piece of a computation? Each partial solution is then integrated to find the full result. This approach is similar in concept to the LHC’s, in which one complete copy of the raw data (after filtering) is stored at the CERN research facility in Switzerland that is home to the collider. A second copy is divided into batches that are then distributed to data centers around the world. Each center analyzes its chunk of data and transmits the results to regional computers before moving on to the next batch.

Alon Halevy, a computer scientist at Google, says the biggest breakthroughs in big data are likely to come from data integration. Image: Peter DaSilva for Quanta Magazine

Boyd’s system is based on so-called consensus algorithms. “It’s a mathematical optimization problem,” he said of the algorithms. “You are using past data to train the model in hopes that it will work on future data.” Such algorithms are useful for creating an effective spam filter, for example, or for detecting fraudulent bank transactions.

This can be done on a single computer, with all the data in one place. Machine learning typically uses many processors, each handling a little bit of the problem. But when the problem becomes too large for a single machine, a consensus optimization approach might work better, in which the data set is chopped into bits and distributed across 1,000 “agents” that analyze their bit of data and each produce a model based on the data they have processed. The key is to require a critical condition to be met: although each agent’s model can be different, all the models must agree in the end — hence the term “consensus algorithms.”

The process by which 1,000 individual agents arrive at a consensus model is similar in concept to the Mechanical Turk crowd-sourcing methodology employed by Amazon — with a twist. With the Mechanical Turk, a person or a business can post a simple task, such as determining which photographs contain a cat, and ask the crowd to complete the task in exchange for gift certificates that can be redeemed for Amazon products, or for cash awards that can be transferred to a personal bank account. It may seem trivial to the human user, but the program learns from this feedback, aggregating all the individual responses into its working model, so it can make better predictions in the future.

In Boyd’s system, the process is iterative, creating a feedback loop. The initial consensus is shared with all the agents, which update their models in light of the new information and reach a second consensus, and so on. The process repeats until all the agents agree. Using this kind of distributed optimization approach significantly cuts down on how much data needs to be transferred at any one time.
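
A toy version of that iterative loop, with simple averaging standing in for the full consensus optimization machinery (the data, the number of agents, and the update rule are invented for illustration):

```python
# Toy consensus loop: 4 "agents" hold equal-sized chunks of data and agree
# on the global mean without ever pooling the raw data. Each round they share
# only their current estimates, then nudge toward the shared consensus.
chunks = [[1.0, 2.0], [3.0, 5.0], [2.0, 4.0], [6.0, 9.0]]
estimates = [sum(c) / len(c) for c in chunks]    # each agent's local model

for _ in range(50):                               # iterate until agreement
    consensus = sum(estimates) / len(estimates)   # shared summary, not raw data
    estimates = [0.5 * e + 0.5 * consensus for e in estimates]

global_mean = sum(sum(c) for c in chunks) / sum(len(c) for c in chunks)
print(all(abs(e - global_mean) < 1e-9 for e in estimates))  # True
```

Only the small per-agent estimates cross the network on each round, never the underlying data, which is the sense in which this approach "cuts down on how much data needs to be transferred at any one time."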

The Quantum Question

Late one night, during a swanky Napa Valley conference last year, MIT physicist Seth Lloyd found himself soaking in a hot tub across from Google’s Sergey Brin and Larry Page — any aspiring technology entrepreneur’s dream scenario. Lloyd made his pitch, proposing a quantum version of Google’s search engine whereby users could make queries and receive results without Google knowing which questions were asked. The men were intrigued. But after conferring with their business manager the next day, Brin and Page informed Lloyd that his scheme went against their business plan. “They want to know everything about everybody who uses their products and services,” he joked.

It is easy to grasp why Google might be interested in a quantum computer capable of rapidly searching enormous data sets. A quantum computer, in principle, could offer enormous increases in processing power, running algorithms significantly faster than a classical (non-quantum) machine for certain problems. Indeed, the company just purchased a reportedly $15 million prototype from a Canadian firm called D-Wave Systems, although the jury is still out on whether D-Wave’s product is truly quantum.

“This is not about trying all the possible answers in parallel. It is fundamentally different from parallel processing,” said Aaronson. Whereas a classical computer stores information as bits that can be either 0s or 1s, a quantum computer could exploit an unusual property: the superposition of states. If you flip a regular coin, it will land on heads or tails. There is zero probability that it will be both heads and tails. But if it is a quantum coin, technically, it exists in an indeterminate state of both heads and tails until you look to see the outcome.

A true quantum computer could encode information in so-called qubits that can be 0 and 1 at the same time. Doing so could reduce the time required to solve a difficult problem that would otherwise take several years of computation to mere seconds. But that is easier said than done, not least because such a device would be highly sensitive to outside interference: The slightest perturbation would be equivalent to looking to see if the coin landed heads or tails, and thus undo the superposition.
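
The quantum-coin analogy can be sketched as a two-amplitude state vector; this is a classical simulation for illustration, not anything a real quantum device would run:

```python
import random

# A "quantum coin": one qubit as two amplitudes, in equal superposition of
# |0> (heads) and |1> (tails). Probabilities are the squared amplitudes.
amp0 = amp1 = 2 ** -0.5                 # amplitude 1/sqrt(2) for each outcome
probs = (amp0 ** 2, amp1 ** 2)
print(round(probs[0], 10), round(probs[1], 10))  # 0.5 0.5

# Looking at the coin (measuring) collapses it to one definite outcome.
outcome = 0 if random.random() < probs[0] else 1
print(outcome in (0, 1))  # True
```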

Data from a seemingly simple query about coffee production across the globe can be surprisingly difficult to integrate. Image: Peter DaSilva for Quanta Magazine

However, Aaronson cautions against placing too much hope in quantum computing to solve big data’s computational challenges, insisting that if and when quantum computers become practical, they will be best suited to very specific tasks, most notably to simulate quantum mechanical systems or to factor large numbers to break codes in classical cryptography. Yet there is one way that quantum computing might be able to assist big data: by searching very large, unsorted data sets — for example, a phone directory in which the names are arranged randomly instead of alphabetically.

It is certainly possible to do so with sheer brute force, using a massively parallel computer to comb through every record. But a quantum computer could accomplish the task in a fraction of the time. That is the thinking behind Grover’s algorithm, which was devised by Bell Labs’ Lov Grover in 1996. However, “to really make it work, you’d need a quantum memory that can be accessed in a quantum superposition,” Aaronson said, but it would need to do so in such a way that the very act of accessing the memory didn’t destroy the superposition, “and that is tricky as hell.”
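
Grover's algorithm can be simulated classically on a small example. The sketch below searches N = 8 unsorted items in roughly √N oracle calls; the database size and marked index are arbitrary choices for illustration:

```python
import math

# Classical simulation of Grover's search over N = 8 unsorted items for a
# single marked index. Each iteration: oracle flips the marked amplitude,
# then the diffusion step inverts every amplitude about the mean.
N, marked = 8, 5
state = [1 / math.sqrt(N)] * N                    # uniform superposition

for _ in range(int(math.pi / 4 * math.sqrt(N))):  # ~sqrt(N) iterations
    state[marked] *= -1                           # oracle
    mean = sum(state) / N
    state = [2 * mean - a for a in state]         # inversion about the mean

print(max(range(N), key=lambda i: state[i] ** 2))  # 5
print(state[marked] ** 2 > 0.9)                    # True
```

After just two iterations the marked item carries over 90 percent of the measurement probability, versus the N/2 lookups a brute-force scan would need on average.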

In short, you need quantum RAM (Q-RAM), and Lloyd has developed a conceptual prototype, along with an accompanying program he calls a Q-App (pronounced “quapp”) targeted to machine learning. He thinks his system could find patterns within data without actually looking at any individual records, thereby preserving the quantum superposition (and the users’ privacy). “You can effectively access all billion items in your database at the same time,” he explained, adding that “you’re not accessing any one of them, you’re accessing common features of all of them.”

For example, if there is ever a giant database storing the genome of every human being on Earth, “you could search for common patterns among different genes” using Lloyd’s quantum algorithm, with Q-RAM and a small 70-qubit quantum processor while still protecting the privacy of the population, Lloyd said. The person doing the search would have access to only a tiny fraction of the individual records, he said, and the search could be done in a short period of time. With the cost of sequencing human genomes dropping and commercial genotyping services rising, it is quite possible that such a database might one day exist, Lloyd said. It could be the ultimate big data set, considering that a single genome is equivalent to 6 billion bits.

Lloyd thinks quantum computing could work well for powerhouse machine-learning algorithms capable of spotting patterns in huge data sets — determining what clusters of data are associated with a keyword, for example, or what pieces of data are similar to one another in some way. “It turns out that many machine-learning algorithms actually work quite nicely in quantum computers, assuming you have a big enough Q-RAM,” he said. “These are exactly the kinds of mathematical problems people try to solve, and we think we could do very well with the quantum version of that.”

The Future Is Integration

Google’s Alon Halevy believes that the real breakthroughs in big data analysis are likely to come from integration — specifically, integrating across very different data sets. “No matter how much you speed up the computers or the way you put computers together, the real issues are at the data level,” he said. For example, a raw data set could include thousands of different tables scattered around the Web, each one listing crime rates in New York, but each may use different terminology and column headers, known as “schema.” A header of “New York” can describe the state, the five boroughs of New York City, or just Manhattan. You must understand the relationship between the schemas before the data in all those tables can be integrated.
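
A toy version of the schema problem makes the idea concrete; the table contents, headers, and mapping below are all invented for illustration:

```python
# Toy schema integration: two hypothetical Web tables describe crime rates
# with different headers; a hand-written mapping resolves both into one
# common schema so the rows can be merged.
table_a = [{"Region": "Manhattan", "CrimeRate": 2200}]
table_b = [{"borough": "Brooklyn", "crimes_per_100k": 1900}]

mappings = {
    "Region": "area", "CrimeRate": "rate_per_100k",
    "borough": "area", "crimes_per_100k": "rate_per_100k",
}

def integrate(*tables):
    """Rewrite every row's headers into the common schema and merge."""
    return [{mappings[k]: v for k, v in row.items()}
            for table in tables for row in table]

print(integrate(table_a, table_b))
# [{'area': 'Manhattan', 'rate_per_100k': 2200},
#  {'area': 'Brooklyn', 'rate_per_100k': 1900}]
```

The hard part in practice is producing the mapping itself, which is exactly where Halevy expects semantic analysis and crowd-sourced refinement to come in.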

That, in turn, requires breakthroughs in techniques to analyze the semantics of natural language. It is one of the toughest problems in artificial intelligence — if your machine-learning algorithm aspires to perfect understanding of nearly every word. But what if your algorithm needs to understand only enough of the surrounding text to determine whether, for example, a table includes data on coffee production in various countries so that it can then integrate the table with other, similar tables into one common data set? According to Halevy, a researcher could first use a coarse-grained algorithm to parse the underlying semantics of the data as best it could and then adopt a crowd-sourcing approach like a Mechanical Turk to refine the model further through human input. “The humans are training the system without realizing it, and then the system can answer many more questions based on what it has learned,” he said.

Chris Mattmann, a senior computer scientist at NASA’s Jet Propulsion Laboratory and director at the Apache Software Foundation, faces just such a complicated scenario with a research project that seeks to integrate two different sources of climate information: remote-sensing observations of the Earth made by satellite instrumentation and computer-simulated climate model outputs. The Intergovernmental Panel on Climate Change would like to be able to compare the various climate models against the hard remote-sensing data to determine which models provide the best fit. But each of those sources stores data in different formats, and there are many different versions of those formats.

Many researchers emphasize the need to develop a broad spectrum of flexible tools that can deal with many different kinds of data. For example, many users are shifting from traditional highly structured relational databases, broadly known as SQL, which represent data in a conventional tabular format, to a more flexible format dubbed NoSQL. “It can be as structured or unstructured as you need it to be,” said Matt LeMay, a product and communications consultant and the former head of consumer products at URL shortening and bookmarking service Bitly, which uses both SQL and NoSQL formats for data storage, depending on the application.
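
The SQL/NoSQL contrast can be sketched with Python's built-in sqlite3 on the relational side and plain dictionaries standing in for a document store; the table name and fields are invented for illustration:

```python
import sqlite3

# The same record stored two ways: a SQL table enforces a fixed schema,
# while a NoSQL-style document store lets each record carry its own shape.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE clicks (url TEXT, count INTEGER)")
conn.execute("INSERT INTO clicks VALUES (?, ?)", ("http://example.com", 42))

documents = [  # schemaless: this record has an extra field, and that is fine
    {"url": "http://example.com", "count": 42, "referrers": ["a", "b"]},
]

print(conn.execute("SELECT count FROM clicks").fetchone()[0])  # 42
print(documents[0]["referrers"])  # ['a', 'b']
```

Adding the `referrers` field to the SQL table would require altering the schema for every row, which is the flexibility trade-off LeMay describes.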

Mattmann cites an Apache software program called Tika that allows the user to integrate data across 1,200 of the most common file formats. But in some cases, some human intervention is still required. Ultimately, Mattmann would like to fully automate this process via intelligent software that can integrate differently structured data sets, much like the Babel Fish in Douglas Adams’ “Hitchhiker’s Guide to the Galaxy” book series enabled someone to understand any language.

Integration across data sets will also require a well-coordinated distributed network system comparable to the one conceived of by Newman’s group at Caltech for the LHC, which monitors tens of thousands of processors and more than 10 major network links. Newman foresees a computational future for big data that relies on this type of automation through well-coordinated armies of intelligent agents that track the movement of data from one point in the network to another, identifying bottlenecks and scheduling processing tasks. Each might only record what is happening locally but would share the information in such a way as to shed light on the network’s global situation.

“Thousands of agents at different levels are coordinating to help human beings understand what’s going on in a complex and very distributed system,” Newman said. The scale would be even greater in the future, when there would be billions of such intelligent agents, or actors, making up a vast global distributed intelligent entity. “It’s the ability to create those things and have them work on one’s behalf that will reduce the complexity of these operational problems,” he said. “At a certain point, when there’s a complicated problem in such a system, no set of human beings can really understand it all and have access to all the information.”

Predicting the Future Could Improve Remote-Control of Space Robots (Wired)

BY ADAM MANN

10.15.13

A new system could make space exploration robots faster and more efficient by predicting where they will be in the very near future.

The engineers behind the program hope to overcome a particular snarl affecting our probes out in the solar system: the pesky delay imposed by the speed of light. Any command sent to a robot on a distant body takes time to arrive and won’t be executed for a while. By building a model of the terrain surrounding a rover and providing an interface that lets operators forecast how the probe will move within it, engineers can identify potential obstacles and make decisions nearer to real time.

“You’re reacting quickly, and the rover is staying active more of the time,” said computer scientist Jeff Norris, who leads mission operation innovations at the Jet Propulsion Laboratory’s Ops Lab.

As an example, the distance between Earth and Mars creates round-trip lags of up to 40 minutes. Nowadays, engineers send a long string of commands once a day to robots like NASA’s Curiosity rover. These get executed but then the rover has to stop and wait until the next instructions are beamed down.

Because space exploration robots are multi-million- or even multi-billion-dollar machines, they have to work very carefully. One day’s commands might tell Curiosity to drive up to a rock. It will then check that it has gotten close enough. Then, the following day, it will be instructed to place its arm on that rock. Later on, it might be directed to drill into or probe this rock with its instruments. While safe, this method is very inefficient.

“When we only send commands once a day, we’re not dealing with 10- or 20-minute delays. We’re dealing with a 24-hour round trip,” said Norris.

Norris’ lab wants to improve the speed and productivity of distant probes. Their interface simulates, more or less, where a robot would be given a particular time delay. This is represented by a small ghostly machine, called the “committed state,” moving just ahead of the rover. The ghosted robot is the software’s best guess of where the probe would end up if operators hit the emergency stop button right then.

By looking slightly into the future, the interface allows a rover driver to update decisions and commands at a much faster rate than is currently possible. Say a robot on Mars is commanded to drive forward 100 meters. But halfway there, its sensors notice an interesting rock that scientists want to investigate. Rather than waiting for the rover to finish its drive and then commanding it to go back, this new interface would give operators the ability to write and rewrite their directions on the fly.

The simulation can’t know every detail around a probe and so provides a small predictive envelope as to where the robot might be. Different terrains have different uncertainties.
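
A minimal sketch of such a prediction, assuming straight-line driving at constant speed; the function name, the slip model, and all the numbers below are invented for illustration:

```python
# Sketch of a "committed state" prediction: where the rover will be by the
# time an emergency stop command arrives, plus a terrain-dependent
# uncertainty envelope around that position.
def committed_state(position_m, speed_mps, delay_s, slip_fraction):
    travel = speed_mps * delay_s        # distance covered before the stop lands
    predicted = position_m + travel
    envelope = travel * slip_fraction   # wider on loose sand than on hard rock
    return predicted, envelope

# 20-minute one-way delay, rover creeping at 1 cm/s on loose sand (20% slip).
pos, err = committed_state(0.0, 0.01, 20 * 60, 0.20)
print(round(pos, 2), round(err, 2))  # 12.0 2.4
```

On hard rock the slip fraction would shrink, tightening the envelope the operators see around the ghosted robot.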

“If you’re on loose sand, that might be different than hard rock,” said software engineer Alexander Menzies, who works on the interface.

Menzies added that when they tested the interface, users had an “almost game-like experience” trying to optimize commands for a robot. He designed an actual video game where participants were given points for commanding a time-delayed robot through a slalom-like terrain. (Norris lamented that he had the highest score on that game until the last day of testing, when Menzies beat him.)

The team thinks that aspects of this new interface could start to be used in the near future, perhaps even with the current Mars rovers Curiosity and Opportunity. At this point, though, Mars operations are limited by bandwidth. Because there are only a few communications satellites in orbit around the Red Planet, commands can be sent only a few times a day, reducing much of the efficiency that would be gained from this new system. But operations on the moon or a potential asteroid capture and exploration mission – such as the one NASA is currently planning – would likely be in more constant communication with Earth, providing even faster and more efficient operations that could take advantage of this new time-delay-reducing system.

Video: OPSLabJPL/Youtube