
Fukushima Forever (Huff Post)

Charles Perrow

Posted: 09/20/2013 2:49 pm

Recent disclosures of tons of radioactive water from the damaged Fukushima reactors spilling into the ocean are just the latest evidence of the continuing incompetence of the Japanese utility, TEPCO. The announcement that the Japanese government will step in is also not reassuring, since it was the Japanese government that failed to regulate the utility for decades. But, bad as it is, the current contamination of the ocean should be the least of our worries. The radioactive poisons are expected to form a plume that will be carried by currents to the coast of North America. But the effects will be small, adding an unfortunate bit to our background radiation. Fish swimming through the plume will be affected, but we can avoid eating them.

Much more serious is the danger that the spent fuel rod pool at the top of reactor building number four will collapse in a storm or an earthquake, or during a failed attempt to carefully remove each of the 1,535 rods and safely transport them to the common storage pool 50 meters away. Conditions in the unit 4 pool, 100 feet above the ground, are perilous, and if any two of the rods touch they could set off an uncontrollable nuclear reaction. The radiation emitted from all these rods, if they are not continually cooled and kept separate, would require the evacuation of surrounding areas, including Tokyo. Because of the radiation at the site, the 6,375 rods in the common storage pool could not be continuously cooled; they would fission, and all of humanity would be threatened for thousands of years.

Fukushima is just the latest episode in a dangerous dance with radiation that has been going on for 68 years. Since the atomic bombing of Nagasaki and Hiroshima in 1945 we have repeatedly let loose plutonium and other radioactive substances on our planet, and authorities have repeatedly denied or trivialized their dangers. The authorities include national governments (the U.S., Japan, the Soviet Union/ Russia, England, France and Germany); the worldwide nuclear power industry; and some scientists both in and outside of these governments and the nuclear power industry. Denials and trivialization have continued with Fukushima. (Documentation of the following observations can be found in my piece in the Bulletin of the Atomic Scientists, upon which this article is based.) (Perrow 2013)

In 1945, shortly after the bombing of two Japanese cities, the New York Times headline read: “Survey Rules Out Nagasaki Dangers”; soon after the 2011 Fukushima disaster it read “Experts Foresee No Detectable Health Impact from Fukushima Radiation.” In between these two we had experts reassuring us about the nuclear bomb tests, plutonium plant disasters at Windscale in northern England and Chelyabinsk in the Ural Mountains, and the nuclear power plant accidents at Three Mile Island in the United States and Chernobyl in what is now Ukraine, as well as the normal operation of nuclear power plants.

Initially the U.S. government denied that the low-level radiation experienced by thousands of Japanese people in and near the two cities was dangerous. In 1953, the newly formed Atomic Energy Commission insisted that low-level exposure to radiation “can be continued indefinitely without any detectable bodily change.” Biologists and other scientists took exception to this, and a 1956 report by the National Academy of Sciences, examining data from Japan and from residents of the Marshall Islands exposed to nuclear test fallout, successfully established that all radiation was harmful. The Atomic Energy Commission then promoted a statistical or population approach that minimized the danger: the damage would be so small that it would hardly be detectable in a large population and could be due to any number of other causes. Nevertheless, the Radiation Research Foundation detected it in 1,900 excess deaths among the Japanese exposed to the two bombs. (The Department of Homeland Security estimated only 430 cancer deaths.)

Besides the uproar about the worldwide fallout from testing nuclear weapons, another problem with nuclear fission soon emerged: a fire in a British plant making plutonium for nuclear weapons sent radioactive material over a large area of Cumbria, resulting in an estimated 240 premature cancer deaths, though the link is still disputed. The event was not made public and no evacuations were ordered. Also kept secret, for over 25 years, was a much larger explosion and fire, also in 1957, at the Chelyabinsk nuclear weapons processing plant in the eastern Ural Mountains of the Soviet Union. One estimate is that 272,000 people were irradiated; lakes and streams were contaminated; 7,500 people were evacuated; and some areas still are uninhabitable. The CIA knew of it immediately, but they too kept it secret. If a plutonium plant could do that much damage it would be a powerful argument for not building nuclear weapons.

Powerful arguments were needed, due to the fallout from the fallout from bombs and tests. Peaceful use became the mantra. Project Plowshare, initiated in 1958, conducted 27 “peaceful nuclear explosions” from 1961 until the costs, as well as public pressure from unforeseen consequences, ended the program in 1975. The Chairman of the Atomic Energy Commission indicated Plowshare’s close relationship to the increasing opposition to nuclear weapons, saying that peaceful applications of nuclear explosives would “create a climate of world opinion that is more favorable to weapons development and tests” (emphasis supplied). A Pentagon official was equally blunt, saying in 1953, “The atomic bomb will be accepted far more readily if at the same time atomic energy is being used for constructive ends.” The minutes of a National Security Council meeting in 1953 spoke of destroying the taboo associated with nuclear weapons and “dissipating” the feeling that we could not use an A-bomb.

More useful than peaceful nuclear explosions were nuclear power plants, which would produce the plutonium necessary for atomic weapons as well as legitimating them. Nuclear power plants, the daughter of the weapons program — actually its “bad seed” — were born and soon bore their first fruit with the 1979 Three Mile Island accident. Increases in cancer were found, but the Columbia University study declared that the level of radiation from TMI was too low to have caused them, and the “stress” hypothesis made its first appearance as the explanation for rises in cancer. Another university study disputed this, arguing that radiation caused the increase, and since a victims’ suit was involved, it went to a federal judge, who ruled in favor of stress. A third, larger study found “slight” increases in cancer mortality and an increased risk of breast and other cancers, but found “no consistent evidence” of a “significant impact.” Indeed, it would be hard to find such an impact when so many other things can cause cancer, and it is so widespread. And since stress can also cause it, there is ample ambiguity that can be mobilized to defend nuclear power plants.

Ambiguity was mobilized by the Soviet Union after the 1986 Chernobyl disaster. Medical studies by Russian scientists were suppressed, and doctors were told not to use the designation of leukemia in health reports. Only after a few years had elapsed did any serious studies acknowledge that the radiation was serious. The Soviet Union forcefully argued that the large drops in life expectancy in the affected areas were due not just to stress, but to lifestyle changes. The International Atomic Energy Agency (IAEA), charged with both promoting nuclear power and helping make it safe, agreed, and mentioned such things as obesity, smoking, and even unprotected sex, arguing that the affected population should not be treated as “victims” but as “survivors.” The count of premature deaths has varied widely, ranging from UN agencies’ estimate of 4,000 in the contaminated areas of Ukraine, Belarus and Russia to Greenpeace’s figure of 200,000. We also have the controversial worldwide estimate of 985,000 from Russian scientists with access to thousands of publications from the affected regions.

Even when nuclear power plants are running normally they are expected to release some radiation, but so little as to be harmless. Numerous studies have now challenged that. When eight U.S. nuclear plants were closed in 1987 they provided the opportunity for a field test. Two years later strontium-90 levels in local milk had declined sharply, as had birth defects and death rates of infants within 40 miles of the plants. A 2007 study of all German nuclear power plants found that childhood leukemia more than doubled among children living less than 3 miles from the plants, but the researchers held that the plants could not have caused it because their radiation levels were so low. Similar results were found in a French study, with a similar conclusion: it could not be low-level radiation, though they had no other explanation. A meta-study published in 2007 of 136 reactor sites in seven countries, extended to include children up to age 9, found childhood leukemia increases of 14 percent to 21 percent.

Epidemiological studies of children and adults living near the Fukushima Daiichi nuclear plant will face the same obstacles as earlier studies. About 40 percent of the aging population of Japan will die of some form of cancer; how can one be sure a given case was not caused by one of the multiple other causes? It took decades for the effects of the atomic bombs and Chernobyl to clearly emblazon the word “CANCER” on these events. Almost all scientists now agree that the dose effects are linear, that is, any radiation added to natural background radiation, even at low levels, is harmful. But how harmful?
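
To make the arithmetic behind that question concrete, here is a minimal sketch in Python of the linear no-threshold reasoning, assuming an illustrative risk coefficient of roughly 5 percent excess lifetime cancer risk per sievert and a baseline lifetime cancer incidence of about 40 percent; both numbers are assumptions for the example, not figures from this article. It also shows why even a real excess can be nearly impossible to detect epidemiologically.

# Illustrative back-of-the-envelope estimate under a linear no-threshold assumption.
# The coefficient and baseline below are assumed round numbers for this example only.
RISK_PER_SIEVERT = 0.05        # assumed ~5% excess lifetime cancer risk per sievert
BASELINE_LIFETIME_RATE = 0.40  # assumed ~40% baseline lifetime cancer incidence

def excess_cases(population, dose_sv):
    """Expected extra cancers if risk scales linearly with dose."""
    return population * dose_sv * RISK_PER_SIEVERT

population = 1_000_000
dose_sv = 0.010  # 10 millisieverts, expressed in sieverts

extra = excess_cases(population, dose_sv)        # about 500 expected extra cases
baseline = population * BASELINE_LIFETIME_RATE   # about 400,000 expected baseline cases
print(f"excess cases: {extra:.0f}, baseline cases: {baseline:.0f}")
# A few hundred extra cases scattered among hundreds of thousands of baseline cases
# are real harm under the linear model, yet nearly impossible to isolate in a study.

This is the same ambiguity the article describes: under the linear assumption the harm can be both real and statistically invisible.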

University professors have declared that the health effects of Fukushima are “negligible,” will cause “close to no deaths,” and that much of the damage was “really psychological.” Extensive and expensive follow-up on citizens from the Fukushima area, the experts say, is not worth it. There is doubt a direct link will ever be definitively made, one expert said. The head of the U.S. National Council on Radiation Protection and Measurements said: “There’s no opportunity for conducting epidemiological studies that have any chance of success…. The doses are just too low.” We heard this in 1945, at TMI, at Chernobyl, and for normally running power plants. It is surprising that respected scientists refuse to make another test of such an important null hypothesis: that there are no discernible effects of low-level radiation.

Not surprisingly, a nuclear power trade group announced shortly after the March 2011 meltdown at Fukushima (the meltdown started with the earthquake, well before the tsunami hit) that “no health effects are expected” as a result of the events. UN agencies agree with them and with the U.S. Council. The leading UN organization on the effects of radiation concluded: “Radiation exposure following the nuclear accident at Fukushima-Daiichi did not cause any immediate health effects. It is unlikely to be able to attribute any health effects in the future among the general public and the vast majority of workers.” The World Health Organization stated that while people in the United States receive about 6.5 millisieverts per year from sources including background radiation and medical procedures, only two Japanese communities had effective dose rates of 10 to 50 millisieverts, a bit more than normal.

However, other data contradict the WHO and other UN agencies. The Japanese science and technology ministry (MEXT) indicated that a child in one community would have an exposure 100 times the natural background radiation in Japan, rather than a bit more than normal. A hospital reported that more than half of the 527 children examined six months after the disaster had internal exposure to cesium-137, an isotope that poses great risk to human health. A French radiological institute found ambient dose rates 20 to 40 times that of background radiation, and in the most contaminated areas the rates were even 10 times those elevated dose rates. The institute predicts an excess cancer rate of 2 percent in the first year alone. Experts not associated with the nuclear industry or the UN agencies have estimated from 1,000 to 3,000 cancer deaths. Nearly two years after the disaster the WHO was still declaring that any increase in human disease “is likely to remain below detectable levels.” (It is worth noting that the WHO still only releases reports on radiation impacts in consultation with the International Atomic Energy Agency.)

In March 2013, the Fukushima Prefecture Health Management Survey reported examining 133,000 children using new, highly sensitive ultrasound equipment. The survey found that 41 percent of the children examined had cysts of up to 2 centimeters in size and lumps measuring up to 5 millimeters on their thyroid glands, presumably from inhaled and ingested radioactive iodine. However, as we might expect from our chronicle, the survey found no cause for alarm because the cysts and lumps were too small to warrant further examination. The defense ministry also conducted an ultrasound examination of children from three other prefectures distant from Fukushima and found somewhat higher percentages of small cysts and lumps, adding to the argument that radiation was not the cause. But others point out that radiation effects would not be expected to be limited to what is designated as the contaminated area; that these cysts and lumps, signs of possible thyroid cancer, have appeared alarmingly soon after exposure; that they should be followed up, since it takes a few years for cancer to show up and thyroid cancer is rare in children; and that a control group far from Japan should be tested with the same ultrasound techniques.

The denial that Fukushima has any significant health impacts echoes the denials of the atomic bomb effects in 1945; the secrecy surrounding Windscale and Chelyabinsk; the dismissal of studies suggesting that the fallout from Three Mile Island was, in fact, serious; and the multiple denials regarding Chernobyl (that it happened, that it was serious, and that it is still serious).

As of June 2013, according to a report in The Japan Times, 12 of 175,499 children tested had tested positive for possible thyroid cancer, and 15 more were deemed at high risk of developing the disease. For a disease that is rare, this is a high number. Meanwhile, the U.S. government is still trying to get us to ignore the bad seed. In June 2012, the U.S. Department of Energy granted $1.7 million to the Massachusetts Institute of Technology to address the “difficulties in gaining the broad social acceptance” of nuclear power.

Perrow, Charles. 2013. “Nuclear denial: From Hiroshima to Fukushima.” Bulletin of the Atomic Scientists 69(5): 56-67.

More on the laboratory animal controversy (25/10/2013)

Animal testing: a humanitarian issue (Jornal da Ciência)

October 25, 2013

The scientific community defends animal experimentation and condemns the raid on the Instituto Royal

Louis Pasteur (1822-1895), the first researcher to develop a rabies vaccine, contributed enormously to validating scientific methods based on animal testing; Carlos Chagas (1878-1934) experimented with marmosets and insects in his studies of malaria and in the discovery of Chagas disease; the polio vaccine was only possible thanks to the research Albert Sabin (1906-1993) carried out on dozens of monkeys. What these three scientists have in common, besides international renown, is that they entered history for their great contribution to the advancement of science for the benefit of humanity.

By contrast, the outcry caused by the raid on the Instituto Royal, in São Roque, 59 kilometers from São Paulo, in the early hours of October 18, treats researchers the same way it sees torturers or animal traffickers. Scientific experimentation is not a farra do boi (the traditional ox-baiting revel), a cockfight or a bullfight, that scientists should be treated by public opinion as criminals.

During the raid, activists opposed to the use of animals in scientific research took 178 beagle dogs and seven rabbits from the institute, leaving hundreds of rats behind. “It is a great mistake, an act of irresponsibility and of ignorance of reality to go to the media and claim that animals are no longer necessary for the discovery of new vaccines, medicines and therapies,” warned Renato Cordeiro, a full professor at Fiocruz (the Oswaldo Cruz Foundation).

In an open letter released on October 22, the Brazilian Society for the Advancement of Science (SBPC) and the Brazilian Academy of Sciences (ABC) recall the importance of animal experimentation. “In the history of world medicine, fundamental discoveries have been made, millions of deaths avoided and life expectancies increased thanks to the use of animals in research for human and animal health,” says the text signed by the presidents of the two organizations, Helena Nader and Jacob Palis, respectively.

Renato Cordeiro cited some of these discoveries: quality control of vaccines against polio, measles, diphtheria, tetanus, hepatitis, yellow fever and meningitis was made possible by this type of experimentation. “Animal testing was also essential for the discovery of anesthetics, antibiotics and anti-inflammatory drugs, and of drugs to control high blood pressure and diabetes,” he listed, also mentioning medicines to control pain and asthma, treatments for anxiety, antidepressants, chemotherapy drugs and contraceptive hormones.

More than that, animals themselves have benefited from advances in science in the fields of therapeutics and experimental surgery. The researcher highlights the vaccines for rabies, distemper and foot-and-mouth disease, as well as research on the feline immunodeficiency virus, tuberculosis and several infectious and parasitic diseases.

Another researchers’ association, the Federation of Societies of Experimental Biology (FeSBE), also released a statement repudiating the raid on the Instituto Royal. According to the text, society wants quality of life and animal health to advance at the same pace. “Scientific research has been responding to that demand, but obscurantism must be eradicated from our midst so that society can enjoy recent scientific advances and those yet to be produced,” the statement says.

Along the same lines, Cordeiro cites work under way in Brazilian laboratories. “It aims at the discovery of vaccines and medicines for malaria, AIDS, dengue, tuberculosis and other diseases. We could say that experimental animals bear a large share of the responsibility for the survival of the human race on the planet,” he argues.

Although some of this research requires sophisticated techniques and high-technology equipment, the use of laboratory animals is still necessary to carry it out. “Because of the complexity of the biological cell,” he explains. Researchers are already working hard on alternative methods so that one day animals will no longer be needed in the process. For now, however, cell and molecular biology offers that possibility in only a few cases. “Through tissue culture techniques and computer simulations,” he clarifies.

Animal welfare in science

On one point, scientists and the invading activists agree: animals should not suffer. Since there are still no methods capable of replacing animal testing in a series of studies that are fundamental to the future of humanity and to human health and survival, what several countries are doing is regulating and inspecting these activities in order to minimize the animals’ suffering and to assess the relevance of the studies to humanity.

In Brazil, the body responsible for setting these rules is the National Council for the Control of Animal Experimentation (Concea), part of the Ministry of Science, Technology and Innovation, of which Cordeiro was the first coordinator. Today Concea is coordinated by Marcelo Morales, one of the secretaries of the SBPC and also of FeSBE.

A major milestone came with the approval of the Arouca Law (11,794/2008), which regulated the breeding and use of animals in teaching and scientific research. Besides creating Concea, the new law required research institutions to set up an Ethics Committee on Animal Use (Ceua).

These committees are essential for approving, controlling and overseeing breeding, teaching and scientific research activities involving animals, and for guaranteeing compliance with the rules governing animal experimentation. “The Ceuas represent a major culture change in science. They are made up of veterinarians and biologists, lecturers and researchers in the specific field of the research, and a representative of animal protection societies legally constituted and established in the country,” says the former Concea coordinator.

These representatives play an important role in the process. “They are highly qualified professionals, trained at the doctoral level, and they have made excellent contributions to Concea’s discussions and deliberations,” Cordeiro says. Considered the bible of research laboratories, the Brazilian Guideline for the Care and Use of Animals for Scientific and Educational Purposes (DBCA) was cited by the researcher as one recent example of the competence of the council’s members.

(Mario Nicoll / Jornal da Ciência)

*   *   *

Surgery on a pig sharpens the debate over the use of animals in experiments (Correio Braziliense)

Activists break into a medicine class at PUC-Campinas to film a surgery in which students practice tracheostomy techniques on a live pig. Actions of this kind – such as the theft of dogs from a São Paulo laboratory – worry scientists

Five days after a group of activists broke into the Instituto Royal, in São Roque (in the interior of São Paulo state), to take 178 beagle dogs in protest against the use of animals as test subjects, another action was recorded in the interior of São Paulo. A group of people interrupted a practical class in the medicine program at the Pontifical Catholic University (PUC) of Campinas, in which six pigs were being used to teach students the technique of tracheostomy (opening a hole in the trachea to allow breathing). The activists filmed the procedures and then left. Later, a government reaction to the recent wave of protests came in the form of a statement signed by the National Council for the Control of Animal Experimentation (Concea), of the Ministry of Science, Technology and Innovation.

Marcelo Morales, coordinator of Concea, told the Correio that he views the current wave of protests with concern. “This obscurantist movement taking place in Brazil, whose origin we do not know, is very worrying; it shows irrationality and a great backwardness,” said the researcher, who also sits on the board of the Brazilian Society for the Advancement of Science (SBPC). According to him, it is the development of drugs and treatments for diseases that is threatened by the “radicalism” of self-styled animal defenders.

“Activists repeat what they read on the internet; they have no scientific grounding. The claim that we can use alternative methods for most procedures is nonsense. Such methods are currently very few. In medical schools we already use ways of reducing the number of animals, such as filming classes. But it is not always possible. At some point, the students need the animal,” says Morales.

(Renata Mari/Correio Braziliense)

http://www.correiobraziliense.com.br/app/noticia/politica-brasil-economia/33,65,33,12/2013/10/25/interna_brasil,395271/cirurgia-em-porco-acirra-debate-do-uso-de-cobaias-em-experimentos.shtml

Folha de S.Paulo story on the subject:

Activists break into a practical class with pigs at PUC-Campinas

http://www1.folha.uol.com.br/fsp/cotidiano/135652-ativistas-invadem-aula-pratica-com-porcos-na-puc-de-campinas.shtml

*   *   *

Fiocruz releases public statement in defense of the use of animals in scientific research (Jornal da Ciência)

The document stresses that the medicines, vaccines and therapeutic alternatives available today for human use depended on earlier phases of animal experimentation

The Oswaldo Cruz Foundation (Fiocruz) released a statement reaffirming to society its ethical commitment in the use of animals for scientific purposes. The text stresses that science cannot do without the use of animals in experimentation. “Medicines, vaccines and therapeutic alternatives available today for human use depended on earlier phases of animal experimentation,” the text says.

The document notes that scientific research involving animals is guided by principles of animal welfare, and that the activity is regulated by national and international legal instruments. Fiocruz also recalls that Law 11,794/2008, which regulates the scientific use of animals, was widely championed by its community.

(Jornal da Ciência)

Read the document:

Public statement: Fiocruz and the use of animals in scientific research

The Oswaldo Cruz Foundation (Fiocruz), an institution that has served public health and the Brazilian population since 1900, in view of the recent events observed in the country, comes before the public to fulfill its role of clarification and to reaffirm to society its ethical commitment in the use of animals for scientific purposes.

It is essential to stress that, despite many efforts around the world, under current conditions science cannot do without the use of animals in experimentation. It is also important to point out that the medicines, vaccines and therapeutic alternatives available today for human use depended on earlier phases of animal experimentation. Animal experimentation activities are necessary in veterinary medicine as well.

Scientific research involving animals is guided by the principles of animal welfare, adopting, among others, the criterion of reduction, using the smallest possible number of animals in each experiment, and that of replacement of animal use by another strategy whenever technically feasible.

The activity is regulated by national and international legal instruments, and regulatory bodies operate at several levels, linked to the federal government (the National Council for the Control of Animal Experimentation – Concea), to the veterinary councils and also within scientific institutions themselves (the Ethics Committees on Animal Use – CEUAs).

Fiocruz takes this opportunity to inform society that Law 11,794/2008, which regulates the Federal Constitution with respect to the scientific use of animals, was widely championed by its community, and was reported in Congress by the then federal deputy Sergio Arouca, a public health physician and former president of Fiocruz. In addition, the Foundation was one of the first institutions in the country to establish a CEUA. This body is responsible for approving all scientific projects that involve the use of animals, verifying the ethics of the procedures, the number of animals used, and other issues.

Executive secretary of UN climate body cries when speaking about climate change (O Globo)

October 23, 2013

Christiana Figueres, head of the UN climate convention secretariat, became emotional when speaking about the impact of climate change on future generations at a conference in London

Christiana Figueres, the Costa Rican executive secretary of the United Nations Framework Convention on Climate Change (UNFCCC), made a passionate defense of the negotiations for a new global agreement to combat the problem at a conference in London this Monday. In her speech at the event, Figueres complained about the slow pace of the talks, but said she was optimistic about the possibility that an agreement will be signed in 2015 requiring the world’s main polluting countries to meet targets for reducing greenhouse gas emissions from 2020 onward.

– I always get frustrated with the pace of the negotiations; I was born impatient – Figueres said. – We are moving far, far too slowly, but we are going in the right direction, and that is what gives me courage and hope.

And Figueres kept up the passionate tone after the speech. Approached by a reporter from the British TV network BBC, who asked her about the impact of climate change, she became emotional and even cried.

– I am committed to (the fight against) climate change because of future generations, not because of us, right? We are on our way out – she said. – I simply feel that what we are doing to future generations is totally unfair and immoral. We are condemning them before they are even born. But we have a choice about this, that is the point, we have a choice. If (climate change) is inevitable, then so be it, but we have the choice of trying to change the future we are going to give our children.

(O Globo)

http://oglobo.globo.com/ciencia/secretaria-executiva-de-painel-da-onu-chora-ao-falar-sobre-mudancas-climaticas-10488256#ixzz2iYDyHz8N

Humans are evolving more slowly than apes, study says (Folha de S.Paulo)

October 24, 2013

A Folha de S.Paulo report shows that research has found that the differences between the species lie in which genes are active

Comparing the genetic activity of humans with that of chimpanzees suggests that Homo sapiens is evolving more slowly than the apes. The finding was made by scientists investigating why humans and their closest cousin are so different, even though 98% of their DNA is identical.

The secret of the physical and behavioral differences lies in which genes are actually active in each species. Analyzing embryonic cells, the Brazilian researcher Carolina Marchetto, of the Salk Institute in San Diego (USA), discovered mechanisms that put a brake on the rate of genetic change in the human species.

The finding favors the hypothesis that the advent of culture slowed down biological evolution: since humans adapt to different environments using knowledge, our species no longer depends as much on genetic variation to evolve and survive change.

The apes, mammals with more limited cognition, need their DNA to evolve quickly in order to survive change: they have no way of compensating for the lack of necessary innate traits using knowledge and technology alone.

But doesn’t human DNA also need to evolve? “We don’t know what we are paying for this in terms of adaptation, but for now it works efficiently,” says Marchetto.

The scientist’s work, described today in the journal Nature, helps explain the mystery of the greater diversity of ape DNA. A layperson might think all chimpanzees are alike, but a single wild colony of these apes in Africa has more genetic variability than all of humanity.

THE JUMPING GENE

According to Marchetto’s study, the apes’ greater genetic variability has to do with so-called transposons, genes that jump from one place to another in the chromosomes. In the process, transposons reorganize the genome, activating some genes and deactivating others.

These “jumping genes” are quite active in chimpanzees and bonobos (apes equally close to the human lineage). In humans, the transposons are suppressed by two other genes that are activated in abundance and inhibit the genetic “jump.”

Chimpanzees, in a way, need transposons. With rudimentary tools and no language to transmit knowledge, they have to offer natural selection greater genetic variability so that it can make them better adapted if the environment changes.

Marchetto’s research was only possible because her laboratory at the Salk Institute, led by the biologist Fred Gage, has mastered the technique of reverting cells to an embryonic-like state.

The material used in the research was taken from the skin of apes and people, since there are a number of restrictions on the use of embryos in scientific experiments.

Reverted to the stage of “induced pluripotent cells,” the skin tissue behaves like an embryo, and it becomes possible to investigate the molecular biology of the early stages of development, when the emergence of genetic diversity has future consequences.

“One of the special things about our study is that reprogramming chimpanzee and bonobo cells gives us a model to begin studying evolutionary questions we previously had no way of addressing,” says Marchetto.

TOWARD THE BRAIN

The differences in gene activation between humans and chimpanzees, she explains, are not restricted to embryonic cells. The idea of Marchetto and her colleagues now is to turn these cells into neurons, for example, in order to understand how the molecular biology of both species changes during the formation of the brain.

(Rafael Garcia/ Folha de São Paulo)

http://www1.folha.uol.com.br/ciencia/2013/10/1361208-homem-evolui-mais-devagar-que-macaco-diz-estudo.shtml

More on the rescue of the beagles from the Instituto Royal (24/10/2013)

Experimental animals bear a large share of the responsibility for the survival of the human race on the planet, says former Concea coordinator (Jornal da Ciência)

In an exclusive interview with the Jornal da Ciência, Renato Cordeiro, a Fiocruz researcher, discusses the importance of animal experimentation for science

A senior researcher at Fiocruz, Renato Cordeiro was the first coordinator of the National Council for the Control of Animal Experimentation (Concea), created by the Arouca Law (Law 11,794/2008), which regulates the breeding and use of animals in teaching and scientific research in the country. In this exclusive interview with the Jornal da Ciência, he talks about the rules and about the importance of this type of experimentation for humanity.

Cordeiro’s statements will be used in a story to be published in the next print edition of the Jornal da Ciência. The reports will cover various aspects of the raid on the Instituto Royal, in São Roque, 59 kilometers from São Paulo, in the early hours of October 18. During the raid, animal protection activists took 178 beagle dogs and seven rabbits from the institute, leaving hundreds of rats behind. The act has been repudiated by the scientific community and seen as harmful to science.

Jornal da Ciência – How important is the use of animals in scientific research?

Cordeiro – Great advances in public health have been delivered to humanity thanks to the use of animals in scientific research. Examples include the discovery and quality control of vaccines against polio, measles, diphtheria, tetanus, hepatitis, yellow fever and meningitis. Animal testing was also essential for the discovery of anesthetics, antibiotics and anti-inflammatory drugs; of drugs to control high blood pressure, diabetes, pain and asthma; of treatments for anxiety; and of antidepressants, chemotherapy drugs and contraceptive hormones. At present, several projects are under way in Brazilian laboratories aimed at discovering vaccines and medicines for malaria, AIDS, dengue, tuberculosis and other diseases. We could therefore say that experimental animals bear a large share of the responsibility for the survival of the human race on the planet.

Why is it not possible to give up this type of experimentation?

Although highly sophisticated techniques and high-technology equipment are required for some research to be carried out, because of the complexity of the biological cell the use of laboratory animals is still necessary for its execution. It is worth noting that many researchers, in Brazil and abroad, are already making great efforts to discover alternative methods, so that one day animals will no longer be necessary or used in experimental research. At present, however, cell and molecular biology – through tissue culture techniques and computer simulations – offers that possibility in only a few cases. In that sense, it is a great mistake, an act of irresponsibility and of ignorance of reality to go to the media and claim that animals are no longer necessary for the discovery of new vaccines, medicines and therapies.

How are these tests regulated?

In Brazil, a major milestone for scientific research in the health field came with the approval of Law 11,794 of October 2008, known as the Arouca Law, which regulated the breeding and use of animals in teaching and scientific research in the country. The new law created the National Council for the Control of Animal Experimentation (Concea) and required research institutions to set up an Ethics Committee on Animal Use (Ceua).

Concea’s Resolution No. 1 established the responsibilities of the Ceuas, which are the essential bodies for approving, controlling and overseeing breeding, teaching and scientific research activities involving animals, and for guaranteeing compliance with the animal experimentation rules issued by Concea.

The Ceuas represent a major culture change in science. They are made up of veterinarians and biologists, lecturers and researchers in the specific field of the research, and a representative of animal protection societies legally constituted and established in the country.

Linked to the Ministry of Science, Technology and Innovation, Concea has performed extremely well. Its main responsibilities include issuing and enforcing rules on the humane use of animals for teaching and scientific research purposes; accrediting Brazilian institutions for the breeding or use of animals in teaching and scientific research; and monitoring and evaluating the introduction of alternative techniques to replace the use of animals in teaching and research.

And what is the role of animal protection organizations in this process?

The representatives of the animal protection societies legally established in the country are highly qualified professionals, with postgraduate training at the doctoral level, and they have made excellent contributions to Concea’s discussions and deliberations. The Brazilian Guideline for the Care and Use of Animals for Scientific and Educational Purposes (DBCA), published in Normative Resolution No. 12, of September 20, 2013 – a bible for research laboratories in Brazil – is one recent example of the competence of the council’s members.

How important is animal experimentation for the animals themselves?

Domestic animals such as dogs and cats, and animals of economic interest such as cattle, pigs and poultry, have also benefited from advances in science in the fields of therapeutics and experimental surgery. We could highlight the vaccines for rabies, distemper and foot-and-mouth disease, as well as research on the feline immunodeficiency virus, tuberculosis and several infectious and parasitic diseases.

(Mario Nicoll / Jornal da Ciência)

See also:

ABC and SBPC speak out against the raid on the Instituto Royal – Text signed jointly by the presidents of the two organizations, Jacob Palis and Helena Nader

http://www.jornaldaciencia.org.br/Detalhe.php?id=90153

Fiocruz specialist considers the raid on the Instituto Royal a mistake – For Marco Aurélio Martins, the activists’ attack on scientific experiments is an attempt to “irresponsibly” misinform the population

http://www.jornaldaciencia.org.br/Detalhe.php?id=90093

*   *   *

Instituto Royal denies using animals in tests of cosmetics or cleaning products (Agência Brasil)

Physician calls the images of mutilated dogs published by activists on social media “sensationalism”

The Instituto Royal denied yesterday (the 23rd), in a video recorded by the institution’s general manager, Silva Ortiz, that it tested cosmetics or cleaning products on animals. In the early hours of Friday (the 18th), activists broke into the institute and removed 178 beagle dogs that were being used in scientific testing. The activists alleged that the animals were victims of mistreatment and were being used as test subjects for cosmetics and cleaning products.

“We do not test cosmetics on animals; that kind of test is done only by the in vitro method, that is, inside laboratory equipment, without animals,” said the manager. According to her, the research focused on medicines and herbal medicines to treat diseases such as cancer, diabetes, hypertension and epilepsy, among others, as well as on the development of antibiotics and analgesics. “The goal is to test the safety of new medicines so that they can be used by people like you and me.”

According to the physician Marcelo Morales, coordinator of the National Council for the Control of Animal Experimentation (Concea) and board member of the Brazilian Society for the Advancement of Science (SBPC), the animals taken from the laboratory by the activists are in danger.

“You cannot suddenly remove animals that were raised in animal facilities [installations with specific, suitable characteristics, such as a protected environment, where animals used as test subjects are bred or kept], because they can die. They are at risk right now. These are special animals; they have needed the attention of veterinarians since birth. There were elderly animals with kidney problems that were monitored daily. When they are taken out of the institute, they are in danger. Even their medical records were stolen,” he said.

The physician calls the images of mutilated dogs published by activists on social media “sensationalism.” “The animal without an eye is sensationalism on the part of the activists. The animal that appeared with an injured tongue was hurt in a fight with another animal and was treated. It had fully recovered,” he said.

According to the president of the Ethics Committee on Animal Experimentation (Ceea) at Unicamp, Ana Maria Guaraldo, progress in research on stem cells, muscular dystrophy and Chagas disease was made possible through animal research. “The pacemaker was first used in dogs. How many people today have better lives because their arrhythmia is under control?” the researcher asks.

Ana Maria argues that activists should learn more about laboratory research with animals, and she rules out the total replacement of animals in scientific research. “The law provides that alternative methods will be developed and validated to reduce the types of animals used. The process takes, on average, ten years until these new methods are validated, and it is researchers in laboratories who develop the alternative methods,” she explains.

Many kinds of animals are used in research, such as mice, rats, dogs, sheep, fish, opossums, armadillos, pigeons, primates, quail and horses, among others. According to the researcher, under international protocols new molecules must be tested in two rodent species and in a third, non-rodent animal for the research to be validated. “Beagles are docile and of a suitable size. They are animals with full international standardization and have been in laboratories around the world for a long time,” said Ana Maria.

On the other side, the coordinator of the Practical Ethics Laboratory of the Philosophy Department at the Federal University of Santa Catarina (UFSC), Sônia Felipe, advocates ending the use of animals in scientific research. The professor argues that the methods that use animals can be cruel and cause the animals extreme suffering. “The most painful experiments – those involving infections, inflammation, neurological procedures, injuries with acids, fire and all kinds of internal or external damage – allow neither analgesia nor anesthesia, because that would mask the results,” she explains.

The researcher also says there are alternatives for scientific research without the use of animals, but that the pharmaceutical industry has no interest in deepening its knowledge of alternative protocols. “These approaches have been sidelined by science, because many of them would not send anyone to the pharmacy in the hope of obtaining relief or a cure for their diseases. If humans are sick, most of them are sick because they follow a diet that is harmful to their health,” Sônia Felipe believes.

(Heloisa Cristaldo/ Agência Brasil)

http://agenciabrasil.ebc.com.br/noticia/2013-10-23/instituto-royal-nega-que-usava-animais-em-testes-de-cosmeticos-ou-de-produtos-de-limpeza

O Globo

Instituto Royal releases video denying mistreatment and cosmetics testing on beagles

http://oglobo.globo.com/pais/instituto-royal-divulga-video-negando-maus-tratos-uso-de-cosmeticos-em-beagles-10517592#ixzz2ieEqfs7G

*   *   *

Minister says activists’ raid on the Instituto Royal was “a crime” (Agência Brasil)

According to the minister, when the legislation was discussed, the need the scientific community has to conduct tests of new medicines was also discussed

The Minister of Science, Technology and Innovation, Marco Antonio Raupp, condemned yesterday (the 23rd), in the Chamber of Deputies, the raid on the Instituto Royal, in São Paulo, by animal rights activists. For the minister, the episode, which took place last Friday (the 18th), was a “crime.” In the incident, the militants removed 178 beagle dogs that were being used in scientific research.

“This raid is a crime. It was carried out in defiance of the law. When the legislation was discussed, we also discussed the need the scientific community has – public agencies, universities and companies alike – to conduct tests of new medicines. That is how it is everywhere in the world, not only in Brazil.”

Raupp went to the Chamber of Deputies to take part in a joint public hearing of the chamber’s thematic committees on the bill for the National Code of Science and Technology (PL 2,177/2011), whose report was presented today by the rapporteur, Deputy Sibá Machado (PT-AC). According to the minister, given its importance, the bill amounts to a “mini-constituent assembly for science and technology” that will give the sector a major boost in the country.

It was decided that the Chamber of Deputies’ Science and Technology Committee will ask the college of party leaders next week to put the bill to a vote in the plenary. The committee vote was also left for next week, but before that the rapporteur will meet with representatives of the ministries that took part in the hearing – Education; Science, Technology and Innovation; Development, Industry and Foreign Trade; and Defense – to discuss changes to the substitute bill he presented, incorporating points these sectors consider important.

(Jorge Wamburg/Agência Brasil)

http://agenciabrasil.ebc.com.br/noticia/2013-10-23/ministro-diz-que-invasao-de-ativistas-ao-instituto-royal-foi-%E2%80%9Cum-crime%E2%80%9D

*   *   *

Anvisa reviews legislation on the use of animals for scientific purposes (Agência Brasil)

The rules for the use of animals in research are set by the Arouca Law and by the animal research ethics committees linked to the Research Ethics Committee System

The legislation governing the use of animals for scientific and educational purposes is under review by the National Health Surveillance Agency (Anvisa). The agency is assessing whether there are gaps in the oversight of research for the production of medicines and cosmetics that could affect the use of test animals.

According to Anvisa, the current legislation does not specify which body is responsible for inspecting animal research laboratories. Within the regulatory agency’s remit, there is no explicit requirement for animal testing, but applicants must submit data proving the safety of the various products registered with Anvisa. Alternative methods are accepted as long as they are able to prove the safety of the product.

Last week, the agency said in a statement that two years ago it signed a cooperation agreement with the Brazilian Center for the Validation of Alternative Methods (Bracvam), linked to the Oswaldo Cruz Foundation (Fiocruz), to validate alternative methods that do not require animals.

The rules for the use of animals in research are set by Law 11,794, known as the Arouca Law, and by the animal research ethics committees linked to the Research Ethics Committee System. Under the Arouca Law, institutions that carry out activities with animals are subject to five types of penalty, ranging from a warning and the suspension of official funding to the permanent closure of the laboratory. Fines can range from R$ 5,000 to R$ 20,000.

Responsible for regulating scientific activities with animals, the National Council for the Control of Animal Experimentation (Concea), linked to the Ministry of Science, Technology and Innovation, determines through a guideline that scientific or educational activities must consider the replacement of animal use, the reduction of the number of animals used, and the refinement of techniques to lessen the negative impact on the animals’ welfare.

The guideline also directs professionals to choose humane methods for conducting projects, to assess the animals regularly for evidence of pain or acute stress over the course of the project, and to use tranquilizers, analgesics and anesthetics appropriate to the animal species and to the scientific or educational objectives.

(Heloisa Cristaldo/ Agência Brasil)

http://agenciabrasil.ebc.com.br/noticia/2013-10-23/anvisa-analisa-legislacao-que-trata-do-uso-de-animais-para-fins-cientificos

*   *   *

ABC and SBPC speak out against the raid on the Instituto Royal

October 23, 2013

Text signed jointly by the presidents of the two organizations, Jacob Palis and Helena Nader

The Brazilian Academy of Sciences and the Brazilian Society for the Advancement of Science, together with the other organizations representing the scientific community, repudiate the violent acts committed against the Instituto Royal, in São Roque (SP), which conducts risk-assessment and safety studies of new medicines.

It is important to make clear to Brazilian society the important research work carried out at the Instituto Royal for the development of Brazil. The institute was accredited by the National Council for the Control of Animal Experimentation (CONCEA), and each of its projects was evaluated and approved by an Ethics Committee on Animal Use (CEUA), complying in every respect with the Arouca Law, No. 11,794, passed by the National Congress in 2008. This law regulates the responsible breeding and use of animals in teaching and scientific research activities throughout the national territory, preventing animal life from being sacrificed in vain.

Brazilian citizens should know that CONCEA includes in its membership representatives of the animal protection societies legally established in the country, and that in the history of world medicine fundamental discoveries have been made, millions of deaths avoided and life expectancies increased thanks to the use of animals in research for human and animal health.

The Instituto Royal is directed by Prof. João Antonio Pegas Henriques, a full member of the Brazilian Academy of Sciences and an active member of the SBPC, a CNPq level 1-A researcher and an adviser in graduate programs, always rigorous and competent. This institute is exceedingly important if Brazil is to build real capacity to produce medicines and inputs for human and animal health.

It is essential that the authorities, but above all society at large, prevent misguided acts that destroy years of important scientific activity, and safeguard the research carried out in Brazilian universities and research institutions.

October 22, 2013

Jacob Palis

President of the Brazilian Academy of Sciences

Helena Bonciani Nader

President of the Brazilian Society for the Advancement of Science

*   *   *

Animal behavior (Folha de S.Paulo)

October 23, 2013

Folha de S.Paulo editorial on scientific experiments on animals

The use of animals in scientific experiments is a topic of public debate that can easily become trapped in a sterile polarization.

At one extreme gathers the sentimental radicalism of those who consider it defensible to violate laws and property in order to “save” animals from alleged mistreatment. At the other, the short-sighted pragmatism of those who treat the advancement of research as a higher value justifying any form of animal suffering.

The escalation has played out in several countries and, as in Brazil, the debate has gone astray – witness the raid on an animal facility in São Roque (SP) and the legion of supporters it found.

We have not yet reached the paroxysm seen in the United Kingdom in 2004, when the Animal Liberation Front used threats and attacks to block the construction of animal testing centers in Oxford and Cambridge.

For a long time, however, the discussion has been free of irrational extremism. Researchers have a strong interest in reducing the use of animals, because it is expensive and exposes their studies to ethical challenges.

In some cases, though, this resource is still unavoidable, as in carcinogenicity tests (of the capacity to cause tumors). Banning all test animals would mean preventing safety testing of new products, many of them created to relieve human suffering.

It is inescapable, then, to accept a hierarchy of values among species: a human life is worth more than a dog’s, which is worth more than a rat’s. The invaders of the institute in São Roque, for that matter, rescued 178 dogs and left the rodents behind.

That does not mean authorizing scientists to torment, mutilate or sacrifice as many animals as they like. The civilizing tendency has been to subject them to what became known in English as the three Rs: replacement, reduction and refinement.

First, it is a matter of finding substitutes. Much progress has been made with in vitro systems, such as the cultivation of living tissue to test potentially toxic substances. Then, when animals are indispensable, the number of specimens must be reduced to a minimum. The third imperative is to refine methods so as to prevent unnecessary suffering.

These are the principles that govern several national laws on the issue, such as Law 11,794/2008 in Brazil. In a living democracy such as ours, there are institutional paths both to enforce the law and to amend it, and reckless raids are not among the acceptable ones.

(Folha de S. Paulo)

http://www1.folha.uol.com.br/fsp/opiniao/135200-comportamento-animal.shtml

Companion piece published in Folha:

The feelings of animals

http://www1.folha.uol.com.br/fsp/cotidiano/135252-o-sentimento-dos-animais.shtml

*   *   *

FeSBE releases statement repudiating the raid on the Instituto Royal

October 23, 2013

The representative of scientific societies linked to experimental biology holds that property destruction, vandalism and theft must be punished rigorously

The Federation of Societies of Experimental Biology (FeSBE) released a statement expressing its repudiation of the raid on the Instituto Royal, in São Roque, SP. According to the text, society wants better quality of life, longer life expectancy and animal health that advances at the same pace. “Scientific research has been responding to that demand, but obscurantism must be eradicated from our midst so that society can enjoy recent scientific advances and those yet to be produced,” the statement says.

Read the document in full:

“Statement on animal experimentation

The Federation of Societies of Experimental Biology (FeSBE) comes before the public to express its repudiation of the invasion and depredation of the Instituto Royal, in São Roque, and of the aggravated theft of its experimental animals. In the second decade of the 21st century, it is no longer acceptable for actions like this – explicable only by the obscurantism that still dominates minority groups in our society – to be tolerated at any level. The institute follows the technical and ethical norms of the National Council for the Control of Animal Experimentation (CONCEA), as well as the requirements of other national and international bodies, and conducts highly relevant research for the development of medicines and other products that are fundamental to human and animal health alike. Destroying such an asset, or preventing the institution from continuing this research, therefore amounts to disrespect for the animals themselves. Law 11,794, the Arouca Law, governs research with animals in Brazil and must be respected like the other laws that govern all our daily conduct as citizens. Any violations of the Arouca Law must be punished with the full rigor of the law; property destruction, vandalism, theft and the blocking of other people’s rights must be punished with the same rigor, within the rule of law under which we live. Any other stance means a departure from the rule of law, with the obvious consequences that can follow from it.

FeSBE, as a representative of scientific societies linked to experimental biology, supports and will always support scientific research conducted within scientific and ethical principles, which are public knowledge, including those governing animal experimentation. Society at large wants better quality of life, wants life expectancy to increase and wants animal health to advance at the same pace. Scientific research has been responding to that demand, but obscurantism must be eradicated from our midst so that society can enjoy recent scientific advances and those yet to be produced in the years ahead.

FeSBE Board of Directors”

Chronic diseases on the rise among indigenous people of the Xingu (Fapesp)

Once rare or nonexistent, these conditions now show worrying rates among the Khisêdjê people, indicates a study conducted at Unifesp (photo: Gimeno and collaborators)

14/10/2013

By Noêmia Lopes

Agência FAPESP – Malaria, respiratory infections and diarrhea were the main causes of death in the Xingu Indigenous Park (PIX), in Mato Grosso, in 1965, the year when the Escola Paulista de Medicina (EPM), now part of the Universidade Federal de São Paulo (Unifesp), took over responsibility for the health of the indigenous peoples living there.

Today malaria is under control and, although infectious and parasitic diseases still matter in terms of mortality, it is chronic non-communicable conditions, such as hypertension, glucose intolerance and dyslipidemia (an abnormal increase in blood lipid levels), that are on the rise.

With this picture in mind, EPM/Unifesp researchers examined and interviewed 179 Khisêdjê Indians, residents of the central area of the Xingu park in Mato Grosso, between 2010 and 2011.

Analysis of the results showed a prevalence of arterial hypertension of 10.3% across both sexes, with 18.7% of the women and 53% of the men presenting blood pressure levels considered worrying.

“Taking blood pressure values equal to or greater than 140/90 mmHg as indicative of hypertension, studies have found prevalences between 22.3% and 43.9% in the general Brazilian population,” said Suely Godoy Agostinho Gimeno, coordinator of the Khisêdjê study and a researcher at the Department of Preventive Medicine of EPM/Unifesp and at the Health Institute of the São Paulo State Department of Health.

The Khisêdjê study was carried out with support from FAPESP and from the Projeto Xingu, an initiative of the Health and Environment Unit of the Department of Preventive Medicine of EPM/Unifesp.

The Khisêdjê are not yet as hypertensive as other Brazilians, but the picture is delicate, since the condition was nonexistent or rare in Brazilian indigenous villages until a few decades ago.

Glucose intolerance, in turn, was identified in 30.5% of the women (6.9% of the total with diabetes mellitus) and in 17% of the men (2% of the total with diabetes mellitus). And dyslipidemia (an abnormal increase in blood lipid levels) appeared in 84.4% of participants of both sexes.

“We had examined the Khisêdjê before, between 1999 and 2000. Comparing the data from that period with the most recent data, we saw a significant increase in all of these chronic non-communicable diseases. Other studies show that the same increase applies to the other indigenous peoples of the Xingu and of other areas of the country,” said Gimeno.

According to the researcher, the factors transforming this picture among the Indians include greater proximity to urban centers and more intense contact with non-indigenous society, with the incorporation of new habits and customs; an increase in the number of individuals in paid work, abandoning traditional subsistence practices such as farming, hunting and fishing; and greater access to consumer goods, such as processed foods, electronics and boat motors (which remove the need to paddle).

The results were reported to the Khisêdjê, individually and in groups, and the Unifesp health team is following the cases that require medical care.

Even so, the situation worries the researchers, since controlling these diseases requires conditions not always available in the villages, such as refrigeration (in the case of insulin), control of medication doses and schedules, and regular monitoring of blood glucose and blood pressure. According to Gimeno, “encouraging and guaranteeing the preservation of these peoples’ habits and customs would be preventive measures of great value.”

Excess weight

Data collection to trace the nutritional and metabolic profile of the Khisêdjê took place at different times in 2010 and 2011, when the researchers would spend 15 to 20 days in the people’s main village, called Ngojwere.

The information gathered included arm, waist and hip circumferences; weight; height; body composition (water, lean mass and fat mass); blood pressure; biochemical profile (through tests such as blood glucose); physical fitness; socioeconomic status; food consumption; and agricultural practices.

Another result obtained from this analysis was the prevalence of excess weight (overweight or obesity): 36% among the women and 56.8% among the men.

“However, we observed that, particularly among the men, this prevalence is due to a greater amount of muscle mass rather than fatty tissue. This finding suggests that, for the population in question, the criteria for identifying excess weight are not adequate, since these individuals are muscular, not obese,” said Gimeno.

The conclusion, according to the researcher, is corroborated by the physical fitness tests. “Most of the values reveal muscular strength in the lower limbs, muscular endurance in the upper limbs and abdomen, flexibility and cardiorespiratory capacity. Compared with non-indigenous people, the Khisêdjê have an active or very active profile, contradicting the idea that possible sedentary behavior would be associated with the diseases investigated,” she said.

One hypothesis (not empirically proven) that could explain the apparent contradiction is that, in the past, these Indians were even more active than they are today. The possible reduction in habitual physical activity would then be related to the chronic conditions.

Team and impact

Three physicians, four nurses, five nutritionists, two physical education professionals, one sociologist and four undergraduates (from the Medicine and Nursing programs) took part in the research through Unifesp.

The team is completed by a sixth nutritionist from the Health Institute of the São Paulo State Department of Health, and by health agents and indigenous teachers who live in the Ngojwere village and acted as interpreters.

The project has produced six presentations at international conferences and two at national congresses, three master’s theses and an article published in the journal Cadernos de Saúde Pública. The full text can be read at http://www.scielosp.org/pdf/csp/v28n12/11.pdf.

Gene Variants in Immune System Pathways Correlated With Composition of Microbes of Human Body (Science Daily)

Oct. 24, 2013 — Human genes in immunity-related pathways are likely associated with the composition of an individual’s microbiome, which refers to the bacteria and other microbes that live in and on the body, scientists reported today, Oct. 24, at the American Society of Human Genetics 2013 annual meeting in Boston.

Bacterial colonies on an agar plate. This study is the first genome-wide and microbiome-wide investigation to identify the interactions between human genetic variation and the composition of the microbes that inhabit the human body. (Credit: © anyaivanova / Fotolia)

“These genes are significantly enriched in inflammatory and immune pathways and form an interaction network highly enriched with immunity-related functions,” said Ran Blekhman, Ph.D., Assistant Professor, Department of Genetics, Cell Biology, and Development at the University of Minnesota, Minneapolis.

The study is the first genome-wide and microbiome-wide investigation to identify the interactions between human genetic variation and the composition of the microbes that inhabit the human body.

The skin, genital areas, mouth, and other areas of the human body, especially the intestines, are colonized by trillions of bacteria and other microorganisms. “Shifts in the composition of the species of the microbes have been associated with multiple chronic conditions, such as diabetes, inflammatory bowel disease and obesity,” noted Dr. Blekhman.

Dr. Blekhman and his collaborators found evidence of genetic influences on microbiome composition at 15 body sites of 93 people surveyed. “We found in our study that genetic variation correlated with the microbiome at two levels,” he said.

At the individual level, the mathematical procedure known as principal component analysis demonstrated that genetic variation correlated with the overall structure of a person’s microbiome.

At the species level, potential correlations between host genetic variation and the abundance of a single bacterial species were identified, said Dr. Blekhman, who conducted much of the research while a scientist in the lab of Andrew G. Clark, Ph.D., the Jacob Gould Schurman Professor of Population Genetics in the Department of Molecular Biology and Genetics at Cornell University, Ithaca, NY. Dr. Clark is the senior author of the abstract.
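To make the two levels of analysis concrete, here is a rough, purely illustrative sketch in Python (the genotype and abundance matrices are random placeholders, and this is not the authors' actual pipeline): principal components of a host genotype matrix are correlated with principal components of microbiome composition, and a single variant is then tested against a single species' abundance.

```python
# Illustrative sketch only: random stand-ins for genotype and microbiome data.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_people, n_snps, n_species = 93, 500, 40   # 93 subjects, as in the study

genotypes = rng.integers(0, 3, size=(n_people, n_snps)).astype(float)  # 0/1/2 allele counts
abundance = rng.dirichlet(np.ones(n_species), size=n_people)           # relative abundances

def top_pcs(matrix, k=3):
    """Return the first k principal components of a samples-by-features matrix."""
    centered = matrix - matrix.mean(axis=0)
    u, s, _ = np.linalg.svd(centered, full_matrices=False)
    return u[:, :k] * s[:k]

# Level 1: does overall genetic variation track overall microbiome structure?
host_pcs = top_pcs(genotypes)
microbiome_pcs = top_pcs(abundance)
r, p = pearsonr(host_pcs[:, 0], microbiome_pcs[:, 0])
print(f"PC1(host) vs PC1(microbiome): r={r:.2f}, p={p:.3f}")

# Level 2: association between one variant and one species' abundance.
r, p = pearsonr(genotypes[:, 0], abundance[:, 0])
print(f"SNP 0 vs species 0 abundance: r={r:.2f}, p={p:.3f}")
```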

To identify the bacterial species that inhabited each human body site, the researchers mined sequence data from the Human Microbiome Project (HMP), an international program to genetically catalog the microbial residents of the human body.

Using a systems-level association approach, the researchers showed that variation in genes related to immune system pathways was correlated with microbiome composition in the 15 host body sites.

To shed light on the evolutionary history of the symbiosis between humans and their microbiomes, the researchers analyzed sequencing data from the 1000 Genomes Project, which is designed to provide a comprehensive resource on human genetic variation.

They found that the genes in the pathways linked to the composition of an individual’s microbiome vary significantly across populations. “Moreover, many of those genes have been shown in recent studies to be under selective pressure,” said Dr. Blekhman.

“The results highlight the role of host immunity in determining bacteria levels across the body and support a possible role for the microbiome in driving the evolution of bacteria-associated host genes,” he added.

Dr. Blekhman is currently investigating the combined role of host genetics and the microbiome in influencing an individual’s susceptibility to such diseases as colon cancer. His goal is to unravel the interaction between host genomic variation and the gut microbiome in colon cancer incidence, evolution and therapeutic response.

Aboriginal Hunting Practice Increases Animal Populations (Science Daily)

Oct. 24, 2013 — In Australia’s Western Desert, Aboriginal hunters use a unique method that actually increases populations of the animals they hunt, according to a study co-authored by Stanford Woods Institute-affiliated researchers Rebecca and Doug Bird. Rebecca Bird is an associate professor of anthropology, and Doug Bird is a senior research scientist.

Aboriginal hunters looking for monitor lizards as fires burn nearby. (Credit: Rebecca Bird)

The study, published today in Proceedings of the Royal Society B, offers new insights into maintaining animal communities through ecosystem engineering and co-evolution of animals and humans. It finds that populations of monitor lizards nearly double in areas where they are heavily hunted. The hunting method — using fire to clear patches of land to improve the search for game — also creates a mosaic of regrowth that enhances habitat. Where there are no hunters, lightning fires spread over vast distances, landscapes are more homogenous and monitor lizards are more rare.

“Our results show that humans can have positive impacts on other species without the need for policies of conservation and resource management,” Rebecca Bird said. “In the case of indigenous communities, the everyday practice of subsistence might be just as effective at maintaining biodiversity as the activities of other organisms.”

Martu, the aboriginal community the Birds and their colleagues have worked with for many years, refer to their relationship with the ecosystem around them as part of “jukurr” or dreaming. This ritual, practical philosophy and body of knowledge instructs the way Martu interact with the desert environment, from hunting practices to cosmological and social organization. At its core is the concept that land must be used if life is to continue. Therefore, Martu believe the absence of hunting, not its presence, causes species to decline.

While jukurr has often been interpreted as belonging to the realm of the sacred and irrational, it appears to actually be consistent with scientific understanding, according to the study. The findings suggest that the decline in aboriginal hunting and burning in the mid-20th century, due to the persecution of aboriginal people and the loss of traditional economies, may have contributed to the extinction of many desert species that had come to depend on such practices.

The findings add to a growing appreciation of the complex role that humans play in the function of ecosystems worldwide. In environments where people have been embedded in ecosystems for millennia, including areas of the U.S., tribal burning was extensive in many types of habitat. Many Native Americans in California, for instance, believe that policies of fire suppression and the exclusion of their traditional burning practices have contributed to the current crisis in biodiversity and native species decline, particularly in the health of oak woodland communities. Incorporating indigenous knowledge and practices into contemporary land management could become important in efforts to conserve and restore healthy ecosystems and landscapes.

The study was funded by the National Science Foundation.

Journal Reference:

  1. R. B. Bird, N. Taylor, B. F. Codding, D. W. Bird. Niche construction and Dreaming logic: aboriginal patch mosaic burning and varanid lizards (Varanus gouldii) in Australia. Proceedings of the Royal Society B: Biological Sciences, 2013; 280 (1772): 20132297. DOI: 10.1098/rspb.2013.2297

GENERAL OVERVIEW OF THE EFFECTS OF NUCLEAR TESTING (CTBTO)

The material contained in this chapter is based on official government sources as well as information provided by research institutions, policy organizations, peer-reviewed journals and eye witness accounts. 

http://www.ctbto.org/nuclear-testing/the-effects-of-nuclear-testing/

The CTBTO remains neutral in any ongoing disputes related to compensation for veterans of the nuclear test programmes.  

Nuclear weapons have been tested in all environments since 1945: in the atmosphere, underground and underwater. Tests have been carried out onboard barges, on top of towers, suspended from balloons, on the Earth’s surface, more than 600 metres underwater and over 200 metres underground. Nuclear test bombs have also been dropped by aircraft and fired by rockets up to 320 km into the atmosphere.

The Natural Resources Defense Council estimated the total yield of all nuclear tests conducted between 1945 and 1980 at 510 megatons (Mt). Atmospheric tests alone accounted for 428 Mt, equivalent to over 29,000 Hiroshima-size bombs.

Frigate Bird nuclear test explosion seen through the periscope of the submarine USS Carbonero (SS-337), Johnston Atoll, Central Pacific Ocean, 1962.

The first nuclear test was carried out by the United States in July 1945, followed by the Soviet Union in 1949, the United Kingdom in 1952, France in 1960, and China in 1964.

The amount of radioactivity generated by a nuclear explosion can vary considerably depending upon a number of factors. These include the size of the weapon and the location of the burst. An explosion at ground level may be expected to generate more dust and other radioactive particulate matter than an air burst. The dispersion of radioactive material is also dependent upon weather conditions.

Large amounts of radionuclides dispersed into the atmosphere

Levels of radiocarbon (C14) in the atmosphere 1945 – 2000. Image credit: Hokanomono.

The 2000 Report of the United Nations Scientific Committee on the Effects of Atomic Radiation to the General Assembly states that:
“The main man-made contribution to the exposure of the world’s population [to radiation] has come from the testing of nuclear weapons in the atmosphere, from 1945 to 1980. Each nuclear test resulted in unrestrained release into the environment of substantial quantities of radioactive materials, which were widely dispersed in the atmosphere and deposited everywhere on the Earth’s surface.”


Different types of nuclear tests: (1) atmospheric test; (2) underground test; (3) upper atmospheric test; and (4) underwater test.

Concern over bone-seeking radionuclides and the first mitigating steps

Prior to 1950, only limited consideration was given to the health impacts of worldwide dispersion of radioactivity from nuclear testing. Public protests in the 1950s and concerns about the radionuclide strontium-90 (see Chart 1) and its effect on mother’s milk and babies’ teeth were instrumental in the conclusion of the Partial Test Ban Treaty (PTBT) in 1963. The PTBT banned nuclear testing in the atmosphere, outer space and under water, but not underground, and was signed by the United States, the Soviet Union and the United Kingdom. However, France and China did not sign and conducted atmospheric tests until 1974 and 1980 respectively.

Although underground testing mitigated the problem of radiation doses from short-lived radionuclides such as iodine-131, large amounts of plutonium, iodine-129 and caesium-135 (See Chart 1) were released underground. In addition, exposure occurred beyond the test site if radioactive gases leaked or were vented.

Scientist arranging mice for radiation exposure investigations around 1944. (While conducting these experiments, the carcinogenesis of urethane was discovered).

Gradual increase in knowledge about dangers of radiation exposure

Over the past century, there has been a gradual accumulation of knowledge about the hazards of radioactivity. It was recognized early on that exposure to a sufficient radiation dosage could cause injuries to internal organs, as well as to the skin and the eyes.

According to the 2000 Report of the United Nations Scientific Committee on the Effects of Atomic Radiation to the UN General Assembly, radiation exposure can damage living cells, killing some and modifying others. The destruction of a sufficient number of cells will inflict noticeable harm on organs which may result in death. If altered cells are not repaired, the resulting modification will be passed on to further cells and may eventually lead to cancer. Modified cells that transmit hereditary information to the offspring of the exposed individual might cause hereditary disorders. Vegetation can also be contaminated when fallout is directly deposited on external surfaces of plants and absorbed through the roots. Furthermore, people can be exposed when they eat meat and milk from animals grazing on contaminated vegetation.

Radiation exposure has been associated with most forms of leukaemia, as well as cancer of the thyroid, lung and breast.

A girl who lost her hair after being exposed to radiation from the bomb dropped on Hiroshima on 6 August 1945.

Studies reveal link between nuclear weapon testing and cancer

The American Cancer Society’s website explains how ionizing radiation, which refers to several types of particles and rays given off by radioactive materials, is one of the few scientifically proven carcinogens in human beings. Radiation exposure has been associated with most forms of leukaemia, as well as cancer of the thyroid, lung and breast. The time that may elapse between radiation exposure and cancer development can be anything between 10 and 40 years. Degrees of exposure regarded as tolerable in the 1950s are now recognized internationally as unsafe.

An article featured in Volume 94 of American Scientist on Fallout from Nuclear Weapons Tests and Cancer Risks states that a number of studies of biological samples (including bone, thyroid glands and other tissues) have provided increasing proof that specific radionuclides in fallout are implicated in fallout-related cancers.

It is difficult to assess the number of deaths that might be attributed to radiation exposure from nuclear testing. Some studies and evaluations, including an assessment by Arjun Makhijani on the health effects of nuclear weapon complexes, estimate that cancer fatalities due to the global radiation doses from the atmospheric nuclear testing programmes of the five nuclear-weapon States amount to hundreds of thousands. A 1991 study by the International Physicians for the Prevention of Nuclear War (IPPNW) estimated that the radiation and radioactive materials from atmospheric testing taken in by people up until the year 2000 would cause 430,000 cancer deaths, some of which had already occurred by the time the results were published. The study predicted that roughly 2.4 million people could eventually die from cancer as a result of atmospheric testing.

CHART 1 – EFFECTS OF RADIONUCLIDES

Radionuclide | Half-life* | Health hazards
Xenon (Xe) | 6.7 hours | Inhalation in excessive concentrations can result in dizziness, nausea, vomiting, loss of consciousness, and death. At low oxygen concentrations, unconsciousness and death may occur in seconds without warning.
Americium-241 (241Am) | 432 years | Moves rapidly through the body after ingestion and is concentrated within the bones for a long period of time. During this storage americium will slowly decay and release radioactive particles and rays. These rays can cause alteration of genetic materials and bone cancer.
Iodine-131 (131I) | 8 days | When present in high levels in the environment from radioactive fallout, I-131 can be absorbed through contaminated food. It also accumulates in the thyroid gland, where it can destroy all or part of the thyroid. May cause damage to the thyroid as it decays. Thyroid cancer may occur.
Caesium-137 (137Cs) | 30 years | After entering the body, caesium is distributed fairly uniformly through the body, with higher concentration in muscle tissue and lower concentration in bones. Can cause gonadal irradiation and genetic damage.
Krypton-85 (85Kr) | 10.76 years | Inhalation in excessive concentrations can result in dizziness, nausea, vomiting, loss of consciousness, and death.
Strontium-90 (90Sr) | 28 years | A small amount of strontium-90 is deposited in bones and bone marrow, blood and soft tissues when ingested. Can cause bone cancer, cancer of nearby tissues, and leukaemia.
Plutonium-239 (239Pu) | 24,400 years | Released when a plutonium weapon is exploded. Ingestion of even a minuscule quantity is a serious health hazard and can cause lung, bone, and liver cancer. The highest doses are to the lungs, the bone marrow, bone surfaces, and liver.
Tritium (3H) | 12 years | Easily ingested. Can be inhaled as a gas in the air or absorbed through the skin. Enters soft tissues and organs. Exposure to tritium increases the risk of developing cancer. Beta radiation emitted by tritium can cause lung cancer.

* (i.e. the amount of time it takes for half of a quantity of radioactive material to decay)
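To make the definition concrete (a simple decay-law illustration, not taken from the source), the fraction of a radionuclide remaining after a time t is 0.5 raised to the power t divided by the half-life:

```python
# Simple half-life illustration: fraction of a sample remaining after t years.
def fraction_remaining(t_years: float, half_life_years: float) -> float:
    return 0.5 ** (t_years / half_life_years)

# Example with caesium-137 (half-life ~30 years, from Chart 1):
for t in (30, 60, 90):
    print(f"after {t} years: {fraction_remaining(t, 30):.3f} of the Cs-137 remains")
```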

Marie Curie won the Nobel Prize in chemistry in 1911 for her discovery of the elements radium and polonium. The curie unit is named after Marie and Pierre Curie, who conducted pioneering research on radiation.

Measuring radiation doses and biological risks

Scientists use different terms when measuring radiation. The terms can either refer to radiation from a radioactive source, the radiation dose absorbed by a person, or the risk that a person will suffer health effects from exposure to radiation. When a person is exposed to radiation, energy is deposited in the body’s tissues. The amount of energy deposited per unit of weight of human tissue is called the absorbed dose. This is measured using the conventional unit rad or the SI unit gray (Gy). The rad, which stands for radiation absorbed dose, has largely been replaced by the Gy. One Gy is equal to 100 rad.

The curie (symbol Ci) is a unit of radioactivity. It has largely been replaced by the becquerel (Bq), the SI unit of radioactivity; one becquerel is defined as one decay per second in a sample. The curie unit is named after Marie and Pierre Curie, who conducted pioneering research on radiation.

A person’s biological risk (i.e. the risk that a person will suffer health effects from an exposure to radiation) is measured using the conventional unit rem or the SI unit sievert (Sv).

CHART 2. EFFECTS OF DIFFERENT LEVELS OF RADIATION

Radiation dose (rem) | Health impact
5-20 | Possible chromosomal damage.
20-100 | Temporary reduction in number of white blood cells. Mild nausea and vomiting. Loss of appetite. Fatigue, which may last up to four weeks. Greater susceptibility to infection. Greater long-term risk of leukaemia and lymphoma is possible.
100-200 | Mild radiation sickness within a few hours: vomiting, diarrhea, fatigue; reduced resistance to infection. Hair loss. In sufficient amounts, I-131 can destroy all or part of the thyroid gland, leading to thyroid abnormalities or cancer. Temporary male sterility.
200-300 | Serious radiation sickness effects as in 100-200 rem. Body cells that divide rapidly can also be destroyed. These include blood cells, gastrointestinal tract cells, reproductive cells, and hair cells. DNA of surviving cells is also damaged.
300-400 | Serious radiation sickness. Bone marrow and intestine destruction. Haemorrhaging of the mouth.
400-1000 | Acute illness, possible heart failure. Bone marrow almost completely destroyed. Permanent female sterility probable.
1000-5000 | Acute illness, nerve cells and small blood vessels are destroyed. Death can occur in days.
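As a small illustration of how the conventional and SI units relate (using the standard factors 1 Gy = 100 rad and 1 Sv = 100 rem; the script itself is not from the source), the rem bands in Chart 2 can be re-expressed in sieverts:

```python
# Minimal unit-conversion sketch: 1 gray (Gy) = 100 rad, 1 sievert (Sv) = 100 rem.
RAD_PER_GRAY = 100.0
REM_PER_SIEVERT = 100.0

def rad_to_gray(rad: float) -> float:
    """Convert an absorbed dose from rad to gray."""
    return rad / RAD_PER_GRAY

def rem_to_sievert(rem: float) -> float:
    """Convert a dose-equivalent (biological risk) from rem to sievert."""
    return rem / REM_PER_SIEVERT

# Example: the band edges from Chart 2 expressed in SI units.
for rem in (5, 20, 100, 200, 300, 400, 1000, 5000):
    print(f"{rem:>5} rem = {rem_to_sievert(rem):>5.2f} Sv")
```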

 

How Quantum Computers and Machine Learning Will Revolutionize Big Data (Wired)

BY JENNIFER OUELLETTE, QUANTA MAGAZINE

10.14.13

Image: infocux Technologies/Flickr

When subatomic particles smash together at the Large Hadron Collider in Switzerland, they create showers of new particles whose signatures are recorded by four detectors. The LHC captures 5 trillion bits of data — more information than all of the world’s libraries combined — every second. After the judicious application of filtering algorithms, more than 99 percent of those data are discarded, but the four experiments still produce a whopping 25 petabytes (25×10^15 bytes) of data per year that must be stored and analyzed. That is a scale far beyond the computing resources of any single facility, so the LHC scientists rely on a vast computing grid of 160 data centers around the world, a distributed network that is capable of transferring as much as 10 gigabytes per second at peak performance.

The LHC’s approach to its big data problem reflects just how dramatically the nature of computing has changed over the last decade. Since Intel co-founder Gordon E. Moore first defined it in 1965, the so-called Moore’s law — which predicts that the number of transistors on integrated circuits will double every two years — has dominated the computer industry. While that growth rate has proved remarkably resilient, for now, at least, “Moore’s law has basically crapped out; the transistors have gotten as small as people know how to make them economically with existing technologies,” said Scott Aaronson, a theoretical computer scientist at the Massachusetts Institute of Technology.

Instead, since 2005, many of the gains in computing power have come from adding more parallelism via multiple cores, with multiple levels of memory. The preferred architecture no longer features a single central processing unit (CPU) augmented with random access memory (RAM) and a hard drive for long-term storage. Even the big, centralized parallel supercomputers that dominated the 1980s and 1990s are giving way to distributed data centers and cloud computing, often networked across many organizations and vast geographical distances.

These days, “People talk about a computing fabric,” said Stanford University electrical engineer Stephen Boyd. These changes in computer architecture translate into the need for a different computational approach when it comes to handling big data, which is not only grander in scope than the large data sets of yore but also intrinsically different from them.

The demand for ever-faster processors, while important, isn’t the primary focus anymore. “Processing speed has been completely irrelevant for five years,” Boyd said. “The challenge is not how to solve problems with a single, ultra-fast processor, but how to solve them with 100,000 slower processors.” Aaronson points out that many problems in big data can’t be adequately addressed by simply adding more parallel processing. These problems are “more sequential, where each step depends on the outcome of the preceding step,” he said. “Sometimes, you can split up the work among a bunch of processors, but other times, that’s harder to do.” And often the software isn’t written to take full advantage of the extra processors. “If you hire 20 people to do something, will it happen 20 times faster?” Aaronson said. “Usually not.”

Researchers also face challenges in integrating very differently structured data sets, as well as the difficulty of moving large amounts of data efficiently through a highly distributed network.

Those issues will become more pronounced as the size and complexity of data sets continue to grow faster than computing resources, according to California Institute of Technology physicist Harvey Newman, whose team developed the LHC’s grid of data centers and trans-Atlantic network. He estimates that if current trends hold, the computational needs of big data analysis will place considerable strain on the computing fabric. “It requires us to think about a different kind of system,” he said.

Memory and Movement

Emmanuel Candes, an applied mathematician at Stanford University, was once able to crunch big data problems on his desktop computer. But last year, when he joined a collaboration of radiologists developing dynamic magnetic resonance imaging — whereby one could record a patient’s heartbeat in real time using advanced algorithms to create high-resolution videos from limited MRI measurements — he found that the data no longer fit into his computer’s memory, making it difficult to perform the necessary analysis.

Addressing the storage-capacity challenges of big data is not simply a matter of building more memory, which has never been more plentiful. It is also about managing the movement of data. That’s because, increasingly, the desired data is no longer at people’s fingertips, stored in a single computer; it is distributed across multiple computers in a large data center or even in the “cloud.”

There is a hierarchy to data storage, ranging from the slowest, cheapest and most abundant memory to the fastest and most expensive, with the least available space. At the bottom of this hierarchy is so-called “slow memory” such as hard drives and flash drives, the cost of which continues to drop. There is more space on hard drives, compared to the other kinds of memory, but saving and retrieving the data takes longer. Next up this ladder comes RAM, which is much faster than slow memory but offers less space and is more expensive. Then there is cache memory — another trade-off of space and price in exchange for faster retrieval speeds — and finally the registers on the microchip itself, which are the fastest of all but the priciest to build, with the least available space. If memory storage were like real estate, a hard drive would be a sprawling upstate farm, RAM would be a medium-sized house in the suburbs, cache memory would be a townhouse on the outskirts of a big city, and the register memory would be a tiny studio in a prime urban location.

Longer commutes for stored data translate into processing delays. “When computers are slow today, it’s not because of the microprocessor,” Aaronson said. “The microprocessor is just treading water waiting for the disk to come back with the data.” Big data researchers prefer to minimize how much data must be moved back and forth from slow memory to fast memory. The problem is exacerbated when the data is distributed across a network or in the cloud, because it takes even longer to move the data back and forth, depending on bandwidth capacity, so that it can be analyzed.

One possible solution to this dilemma is to embrace the new paradigm. In addition to distributed storage, why not analyze the data in a distributed way as well, with each unit (or node) in a network of computers performing a small piece of a computation? Each partial solution is then integrated to find the full result. This approach is similar in concept to the LHC’s, in which one complete copy of the raw data (after filtering) is stored at the CERN research facility in Switzerland that is home to the collider. A second copy is divided into batches that are then distributed to data centers around the world. Each center analyzes its chunk of data and transmits the results to regional computers before moving on to the next batch.

Alon Halevy, a computer scientist at Google, says the biggest breakthroughs in big data are likely to come from data integration. Image: Peter DaSilva for Quanta Magazine

Boyd’s system is based on so-called consensus algorithms. “It’s a mathematical optimization problem,” he said of the algorithms. “You are using past data to train the model in hopes that it will work on future data.” Such algorithms are useful for creating an effective spam filter, for example, or for detecting fraudulent bank transactions.

This can be done on a single computer, with all the data in one place. Machine learning typically uses many processors, each handling a little bit of the problem. But when the problem becomes too large for a single machine, a consensus optimization approach might work better, in which the data set is chopped into bits and distributed across 1,000 “agents” that analyze their bit of data and each produce a model based on the data they have processed. The key is to require a critical condition to be met: although each agent’s model can be different, all the models must agree in the end — hence the term “consensus algorithms.”

The process by which 1,000 individual agents arrive at a consensus model is similar in concept to the Mechanical Turk crowd-sourcing methodology employed by Amazon — with a twist. With the Mechanical Turk, a person or a business can post a simple task, such as determining which photographs contain a cat, and ask the crowd to complete the task in exchange for gift certificates that can be redeemed for Amazon products, or for cash awards that can be transferred to a personal bank account. It may seem trivial to the human user, but the program learns from this feedback, aggregating all the individual responses into its working model, so it can make better predictions in the future.

In Boyd’s system, the process is iterative, creating a feedback loop. The initial consensus is shared with all the agents, which update their models in light of the new information and reach a second consensus, and so on. The process repeats until all the agents agree. Using this kind of distributed optimization approach significantly cuts down on how much data needs to be transferred at any one time.
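The following toy sketch (synthetic data, and a deliberately simplified stand-in for Boyd's actual consensus algorithms) shows the basic loop: each agent fits a model on its own chunk of data with a penalty pulling it toward the shared consensus, and the consensus is then updated as the average of the agents' models.

```python
# Toy consensus-optimization sketch (synthetic data; not Boyd's actual algorithm).
import numpy as np

rng = np.random.default_rng(1)
true_w = np.array([2.0, -3.0, 0.5])
n_agents, n_per_agent, rho = 1000, 50, 1.0

# Each "agent" holds only its own chunk of the data.
chunks = []
for _ in range(n_agents):
    X = rng.normal(size=(n_per_agent, 3))
    y = X @ true_w + 0.1 * rng.normal(size=n_per_agent)
    chunks.append((X, y))

consensus = np.zeros(3)
for _ in range(10):
    # Each agent solves its local least-squares problem with a penalty that
    # pulls its model toward the current shared consensus.
    local = [np.linalg.solve(X.T @ X + rho * np.eye(3), X.T @ y + rho * consensus)
             for X, y in chunks]
    # The agents then agree on a new consensus: the average of their models.
    consensus = np.mean(local, axis=0)

print("consensus model:", np.round(consensus, 2))  # close to true_w
```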

The Quantum Question

Late one night, during a swanky Napa Valley conference last year, MIT physicist Seth Lloyd found himself soaking in a hot tub across from Google’s Sergey Brin and Larry Page — any aspiring technology entrepreneur’s dream scenario. Lloyd made his pitch, proposing a quantum version of Google’s search engine whereby users could make queries and receive results without Google knowing which questions were asked. The men were intrigued. But after conferring with their business manager the next day, Brin and Page informed Lloyd that his scheme went against their business plan. “They want to know everything about everybody who uses their products and services,” he joked.

It is easy to grasp why Google might be interested in a quantum computer capable of rapidly searching enormous data sets. A quantum computer, in principle, could offer enormous increases in processing power, running algorithms significantly faster than a classical (non-quantum) machine for certain problems. Indeed, the company just purchased a reportedly $15 million prototype from a Canadian firm called D-Wave Systems, although the jury is still out on whether D-Wave’s product is truly quantum.

“This is not about trying all the possible answers in parallel. It is fundamentally different from parallel processing,” said Aaronson. Whereas a classical computer stores information as bits that can be either 0s or 1s, a quantum computer could exploit an unusual property: the superposition of states. If you flip a regular coin, it will land on heads or tails. There is zero probability that it will be both heads and tails. But if it is a quantum coin, technically, it exists in an indeterminate state of both heads and tails until you look to see the outcome.

A true quantum computer could encode information in so-called qubits that can be 0 and 1 at the same time. Doing so could reduce the time required to solve a difficult problem that would otherwise take several years of computation to mere seconds. But that is easier said than done, not least because such a device would be highly sensitive to outside interference: The slightest perturbation would be equivalent to looking to see if the coin landed heads or tails, and thus undo the superposition.

Data from a seemingly simple query about coffee production across the globe can be surprisingly difficult to integrate. Image: Peter DaSilva for Quanta Magazine

However, Aaronson cautions against placing too much hope in quantum computing to solve big data’s computational challenges, insisting that if and when quantum computers become practical, they will be best suited to very specific tasks, most notably to simulate quantum mechanical systems or to factor large numbers to break codes in classical cryptography. Yet there is one way that quantum computing might be able to assist big data: by searching very large, unsorted data sets — for example, a phone directory in which the names are arranged randomly instead of alphabetically.

It is certainly possible to do so with sheer brute force, using a massively parallel computer to comb through every record. But a quantum computer could accomplish the task in a fraction of the time. That is the thinking behind Grover’s algorithm, which was devised by Bell Labs’ Lov Grover in 1996. However, “to really make it work, you’d need a quantum memory that can be accessed in a quantum superposition,” Aaronson said, but it would need to do so in such a way that the very act of accessing the memory didn’t destroy the superposition, “and that is tricky as hell.”
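For a sense of how Grover's algorithm searches an unsorted collection, here is a textbook statevector simulation in ordinary Python (an idealized model, nothing to do with real quantum hardware or Q-RAM): one marked item among N = 2^n is found in roughly (π/4)·√N iterations.

```python
# Textbook simulation of Grover search over N = 2**n items (marked item found
# in ~ (pi/4)*sqrt(N) iterations, versus ~N/2 classical lookups on average).
import numpy as np

n_qubits = 10
N = 2 ** n_qubits
marked = 731                          # index of the item we are searching for

state = np.full(N, 1 / np.sqrt(N))    # uniform superposition over all indices

iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))
for _ in range(iterations):
    # Oracle: flip the sign of the amplitude of the marked item.
    state[marked] *= -1
    # Diffusion: reflect every amplitude about the mean amplitude.
    state = 2 * state.mean() - state

probabilities = state ** 2
print("iterations:", iterations)
print("most likely index:", int(np.argmax(probabilities)))
print("probability of measuring the marked item:", round(float(probabilities[marked]), 4))
```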

In short, you need quantum RAM (Q-RAM), and Lloyd has developed a conceptual prototype, along with an accompanying program he calls a Q-App (pronounced “quapp”) targeted to machine learning. He thinks his system could find patterns within data without actually looking at any individual records, thereby preserving the quantum superposition (and the users’ privacy). “You can effectively access all billion items in your database at the same time,” he explained, adding that “you’re not accessing any one of them, you’re accessing common features of all of them.”

For example, if there is ever a giant database storing the genome of every human being on Earth, “you could search for common patterns among different genes” using Lloyd’s quantum algorithm, with Q-RAM and a small 70-qubit quantum processor while still protecting the privacy of the population, Lloyd said. The person doing the search would have access to only a tiny fraction of the individual records, he said, and the search could be done in a short period of time. With the cost of sequencing human genomes dropping and commercial genotyping services rising, it is quite possible that such a database might one day exist, Lloyd said. It could be the ultimate big data set, considering that a single genome is equivalent to 6 billion bits.

Lloyd thinks quantum computing could work well for powerhouse machine-learning algorithms capable of spotting patterns in huge data sets — determining what clusters of data are associated with a keyword, for example, or what pieces of data are similar to one another in some way. “It turns out that many machine-learning algorithms actually work quite nicely in quantum computers, assuming you have a big enough Q-RAM,” he said. “These are exactly the kinds of mathematical problems people try to solve, and we think we could do very well with the quantum version of that.”

The Future Is Integration

“No matter how much you speed up the computers or the way you put computers together, the real issues are at the data level.”

Google’s Alon Halevy believes that the real breakthroughs in big data analysis are likely to come from integration — specifically, integrating across very different data sets. “No matter how much you speed up the computers or the way you put computers together, the real issues are at the data level,” he said. For example, a raw data set could include thousands of different tables scattered around the Web, each one listing crime rates in New York, but each may use different terminology and column headers, known as “schema.” A header of “New York” can describe the state, the five boroughs of New York City, or just Manhattan. You must understand the relationship between the schemas before the data in all those tables can be integrated.
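A toy example of the schema problem (the tables, headers and mapping below are invented for illustration, not Google's method): before two crime-rate tables can be merged, their differing column headers have to be mapped onto one shared schema.

```python
# Toy schema-integration sketch: reconcile differing column headers before merging.
import pandas as pd

table_a = pd.DataFrame({"New York": ["Manhattan"], "Violent crime rate": [550]})
table_b = pd.DataFrame({"NY (state)": ["New York State"], "crime_rate_violent": [385]})

# A hand-built mapping from each table's headers onto one shared schema.
# In practice this mapping is exactly what has to be learned or crowd-sourced.
schema_map = {
    "New York": "region", "NY (state)": "region",
    "Violent crime rate": "violent_crime_rate", "crime_rate_violent": "violent_crime_rate",
}

merged = pd.concat([t.rename(columns=schema_map) for t in (table_a, table_b)],
                   ignore_index=True)
print(merged)
```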

That, in turn, requires breakthroughs in techniques to analyze the semantics of natural language. It is one of the toughest problems in artificial intelligence — if your machine-learning algorithm aspires to perfect understanding of nearly every word. But what if your algorithm needs to understand only enough of the surrounding text to determine whether, for example, a table includes data on coffee production in various countries so that it can then integrate the table with other, similar tables into one common data set? According to Halevy, a researcher could first use a coarse-grained algorithm to parse the underlying semantics of the data as best it could and then adopt a crowd-sourcing approach like a Mechanical Turk to refine the model further through human input. “The humans are training the system without realizing it, and then the system can answer many more questions based on what it has learned,” he said.

Chris Mattmann, a senior computer scientist at NASA’s Jet Propulsion Laboratory and director at the Apache Software Foundation, faces just such a complicated scenario with a research project that seeks to integrate two different sources of climate information: remote-sensing observations of the Earth made by satellite instrumentation and computer-simulated climate model outputs. The Intergovernmental Panel on Climate Change would like to be able to compare the various climate models against the hard remote-sensing data to determine which models provide the best fit. But each of those sources stores data in different formats, and there are many different versions of those formats.

Many researchers emphasize the need to develop a broad spectrum of flexible tools that can deal with many different kinds of data. For example, many users are shifting from traditional highly structured relational databases, broadly known as SQL, which represent data in a conventional tabular format, to a more flexible format dubbed NoSQL. “It can be as structured or unstructured as you need it to be,” said Matt LeMay, a product and communications consultant and the former head of consumer products at URL shortening and bookmarking service Bitly, which uses both SQL and NoSQL formats for data storage, depending on the application.

Mattmann cites an Apache software program called Tika that allows the user to integrate data across 1,200 of the most common file formats. But in some cases, some human intervention is still required. Ultimately, Mattmann would like to fully automate this process via intelligent software that can integrate differently structured data sets, much like the Babel Fish in Douglas Adams’ “Hitchhiker’s Guide to the Galaxy” book series enabled someone to understand any language.

Integration across data sets will also require a well-coordinated distributed network system comparable to the one conceived of by Newman’s group at Caltech for the LHC, which monitors tens of thousands of processors and more than 10 major network links. Newman foresees a computational future for big data that relies on this type of automation through well-coordinated armies of intelligent agents that track the movement of data from one point in the network to another, identifying bottlenecks and scheduling processing tasks. Each might only record what is happening locally but would share the information in such a way as to shed light on the network’s global situation.

“Thousands of agents at different levels are coordinating to help human beings understand what’s going on in a complex and very distributed system,” Newman said. The scale would be even greater in the future, when there would be billions of such intelligent agents, or actors, making up a vast global distributed intelligent entity. “It’s the ability to create those things and have them work on one’s behalf that will reduce the complexity of these operational problems,” he said. “At a certain point, when there’s a complicated problem in such a system, no set of human beings can really understand it all and have access to all the information.”

Predicting the Future Could Improve Remote-Control of Space Robots (Wired)

BY ADAM MANN

10.15.13

A new system could make space exploration robots faster and more efficient by predicting where they will be in the very near future.

The engineers behind the program hope to overcome a particular snarl affecting our probes out in the solar system: that pesky delay caused by the speed of light. Any commands sent to a robot on a distant body take a certain amount of time to travel and won’t be executed for a while. By building a model of the terrain surrounding a rover and providing an interface that lets operators forecast how the probe will move around within it, engineers can identify potential obstacles and make decisions closer to real time.

“You’re reacting quickly, and the rover is staying active more of the time,” said computer scientist Jeff Norris, who leads mission operation innovations at the Jet Propulsion Laboratory’s Ops Lab.

As an example, the distance between Earth and Mars creates round-trip lags of up to 40 minutes. Nowadays, engineers send a long string of commands once a day to robots like NASA’s Curiosity rover. These get executed but then the rover has to stop and wait until the next instructions are beamed down.

Because space exploration robots are multi-million or even multi-billion-dollar machines, they have to work very carefully. One day’s commands might tell Curiosity to drive up to a rock. It will then check that it has gotten close enough. Then, the following day, it will be instructed to place its arm on that rock. Later on, it might be directed to drill into or probe this rock with its instruments. While safe, this method is very inefficient.

“When we only send commands once a day, we’re not dealing with 10- or 20-minute delays. We’re dealing with a 24-hour round trip,” said Norris.

Norris’ lab wants to make the speed and productivity of distant probes better. Their interface simulates more or less where a robot would be given a particular time delay. This is represented by a small ghostly machine — called the “committed state” — moving just ahead of a rover. The ghosted robot is the software’s best guess of where the probe would end up if operators hit the emergency stop button right then.

By looking slightly into the future, the interface allows a rover driver to update decisions and commands at a much faster rate than is currently possible. Say a robot on Mars is commanded to drive forward 100 meters. But halfway there, its sensors notice an interesting rock that scientists want to investigate. Rather than waiting for the rover to finish its drive and then commanding it to go back, this new interface would give operators the ability to write and rewrite their directions on the fly.
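A highly simplified, one-dimensional sketch of the "committed state" idea (all numbers are invented and this is not the JPL interface): given the one-way signal delay and the drive distance still queued on board, estimate the position the rover is committed to reaching before a stop command sent now could arrive.

```python
# Simplified 1-D sketch of a "committed state": how far the rover will travel
# before an emergency-stop command sent right now could possibly reach it.
ONE_WAY_DELAY_S = 20 * 60        # e.g. ~20 minutes one way at Mars distances (assumed)
ROVER_SPEED_M_S = 0.04           # assumed drive speed, illustrative only

def committed_state(position_m: float, remaining_drive_m: float) -> float:
    """Position the rover is committed to reaching if we hit 'stop' now."""
    # The stop command takes ONE_WAY_DELAY_S to arrive; until then the rover
    # keeps executing whatever drive distance is still queued on board.
    distance_before_stop = min(remaining_drive_m, ROVER_SPEED_M_S * ONE_WAY_DELAY_S)
    return position_m + distance_before_stop

# Example: rover is 12.0 m into a traverse with 60 m of driving still queued.
print(f"committed position: {committed_state(12.0, 60.0):.1f} m")
```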

The simulation can’t know every detail around a probe and so provides a small predictive envelope as to where the robot might be. Different terrains have different uncertainties.

“If you’re on loose sand, that might be different than hard rock,” said software engineer Alexander Menzies, who works on the interface.

Menzies added that when they tested the interface, users had an “almost game-like experience” trying to optimize commands for a robot. He designed an actual video game where participants were given points for commanding a time-delayed robot through a slalom-like terrain. (Norris lamented that he had the highest score on that game until the last day of testing, when Menzies beat him.)

The team thinks that aspects of this new interface could start to be used in the near future, perhaps even with the current Mars rovers Curiosity and Opportunity. At this point, though, Mars operations are limited by bandwidth. Because there are only a few communicating satellites in orbit around the Red Planet, commands can only be sent a few times a day, reducing a lot of the efficiency that would be gained from this new system. But operations on the moon or a potential asteroid capture and exploration mission – such as the one NASA is currently planning – would likely be in more constant communication with Earth, providing even faster and more efficient operations that could take advantage of this new time-delay-reducing system.

Video: OPSLabJPL/Youtube

This Is How Cats See the World (Wired)

BY NADIA DRAKE

10.16.13







The blurriness at the edge of the photos represents the area of peripheral vision in humans (20 degrees, top) and cats (30 degrees, bottom). 

No one ever talks about what the world looks like if you’re a cat. Instead, we speak of the bird’s-eye view and use fish-eye lenses to make things look weird.

But we rarely consider how the internet’s favorite subject sees the world. Luckily, artist Nickolay Lamm has volunteered to act as cat-vision conduit. Here, Lamm presents his idea of what different scenes might look like if you were a cat, taking into consideration the way feline eyes work, and using input from veterinarians and ophthalmologists.

For starters, cats’ visual fields are broader than ours, spanning roughly 200 degrees instead of 180 degrees, and their visual acuity isn’t as good. So, the things humans can sharply resolve at distances of 100-200 feet look blurry to cats, which can see these objects at distances of up to 20 feet. That might not sound so great, but there’s a trade-off: Because of the various photoreceptors parked in cats’ retinas, they kick our asses at seeing in dim light. Instead of the color-resolving, detail-loving cone cells that populate the center of human retinas, cats (and dogs) have many more rod cells, which excel in dim light and are responsible for night-vision capability. The rod cells also refresh more quickly, which lets cats pick up very rapid movements — like, for example, the quickly shifting path a marauding laser dot might trace.

Lastly, cats see colors differently than we do, which is why the cat-versions of these images look less vibrant than the people-versions. Scientists used to think cats were dichromats — able to only see two colors — but they’re not, exactly. While feline photoreceptors are most sensitive to wavelengths in the blue-violet and greenish-yellow ranges, it appears they might be able to see a little bit of green as well. In other words, cats are mostly red-green color blind, as are many of us, with a little bit of green creeping in.
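As a crude approximation of the adjustments described above (not Lamm's actual process; the blur radius and channel mixing are arbitrary choices), one could blur an image to mimic lower acuity and average the red and green channels to mimic feline color perception:

```python
# Rough illustration only: blur for lower acuity, merge red/green for dichromatic color.
from PIL import Image, ImageFilter

def catify(path: str, out_path: str) -> None:
    img = Image.open(path).convert("RGB")
    # Lower visual acuity: a heavy Gaussian blur stands in for ~20/100 vision.
    img = img.filter(ImageFilter.GaussianBlur(radius=4))
    r, g, b = img.split()
    # Crude red-green color-blind simulation: replace both channels with their mean.
    rg = Image.blend(r, g, 0.5)
    Image.merge("RGB", (rg, rg, b)).save(out_path)

# Example usage (file names are placeholders):
# catify("scene.jpg", "scene_cat_view.jpg")
```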

All Photos: Nickolay Lamm, in consultation with Kerry L. Ketring, DVM, DACVO of All Animal Eye Clinic, Dr. DJ Haeussler of The Animal Eye Institute, and the Ophthalmology group at Penn Vet.

Some Monkeys Have Conversations That Resemble Ours (Wired)

BY BRANDON KEIM

10.17.13

A pair of common marmosets. Image: Bart van Dorp/Flickr

The sounds of marmoset monkeys chattering may hint at the mysterious origins of human language.

A new study shows that marmosets exchange calls in a precisely timed, back-and-forth fashion typical of human conversation, but not found in other primates. The monkeys don’t appear to have a language, but the timing suggests the foundations of our own.

“That could be the foundation of more sophisticated things, like syntax,” said psychologist Asif Ghazanfar of Princeton University, co-author of the study, which was published today in Current Biology. “You can’t have any of those other really cool aspects of language without first having this.”

‘If you went back 10 million years, you’d be hard-pressed to predict that an ape would end up with the planet’s most complex vocal communication.’

How language, so complex and information-rich, evolved in Homo sapiens and, as far as we know, no other species, is one of anthropology’s outstanding questions. The traditional, seemingly intuitive answer is that it arose from the vocalizations of ancestors who were capable of a few rudimentary noises and wanted to say more.

Confounding that narrative, though, is the comparatively less-vocal nature of many other primates, including our closest living relatives, chimpanzees and bonobos. They do vocalize, of course, and even say some interesting things, but not with the same flow expected of some proto-human linguistic capability.

That conundrum has led researchers to propose another possible origin of language, one rooted not in our voices but rather our bodies, and in particular our hands. According to this narrative, gesture would have been as important to our ancestors as sound. Indeed, neurological processes underlying speech and language are also intimately linked with motor skills, raising the possibility that language formed on the cognitive scaffold of gesture — and chimpanzees do have a large repertoire of hand movements.

But many scientists, including Ghazanfar and the study’s lead author, fellow Princeton psychologist Daniel Takahashi, aren’t convinced. If human language did follow on gesture, they wonder, why don’t chimps talk more? There’s also no evidence in chimpanzees for vocal turn-taking, or waiting for another person to finish speaking before replying, which is universal in human languages. “If we don’t take turns, if we’re overlapping, it’s very difficult to understand each other,” said Ghazanfar. “Turn-taking is foundational.”

Yet even if chimps don’t take turns, Ghazanfar and Takahashi found that marmosets do. In the new study, they placed pairs of marmosets in the opposite corners of a room, separated by a curtain that allowed them to hear but not see each other, and recorded the ensuing chatter.

These proved to follow turn-taking patterns, with a pause of several seconds between the completion of one monkey’s whistles and the other’s beginning. And unlike the duets of birds, which are often highly synchronized, the exchanges had nothing to do with mating or territoriality. The monkeys were conversing.

Monkey Conversation:
Whistles encoding information about the caller’s identity are exchanged back and forth according to rules of timing also found in human conversation.

Audio: Takahashi et al./Ethology

As for what they said, marmoset whistles are thought to encode information about a caller’s identity, age, gender and location. Ghazanfar thinks the conversations are a sort of “vocal grooming,” a way of easing stress or conveying affection, but delivered at a distance. It only works when monkeys know they’re being addressed individually, which is conveyed by the turn-taking form.

“It could be a pre-adaptation for language,” said evolutionary biologist Thore Bergman of the University of Michigan, who was not involved in the study. Bergman’s own research involves human-sounding lip smacks made by monkeys called geladas.

As for why marmosets and humans take turns, but not chimpanzees, Ghazanfar suspects it’s a function of our social systems. Marmosets are cooperative breeders: Group members take care of offspring unrelated to them, creating community-oriented dynamics of behavior and communication. Ancestral humans may have lived the same way.

Without a time machine, of course, questions about the origin of human language won’t ever be settled. As Bergman noted, the findings don’t exclude the possible importance of gesture. It’s possible that human language arose from the fortuitous interactions of gesture, vocalization and social structure with evolutionary pressure.

Indeterminacy aside, though, it’s fun to speculate, and also to wonder whether the seeds of complex language now exist in animals other than ourselves. Many whales and dolphins, along with syntax-using monkeys and even prairie dogs, communicate in very sophisticated ways.

“If you went back 10 million years, you’d be hard-pressed to predict an ape would end up with the planet’s most complex vocal communication system,” said Thore Bergman. “Why that happened is a really big puzzle.”

Citation: “Coupled Oscillator Dynamics of Vocal Turn-Taking in Monkeys.” By Daniel Takahashi, Darshana Narayanan and Asif Ghazanfar. Current Biology, 17 October 2013.

Skull Fossil Suggests Simpler Human Lineage (New York Times)

The 1.8-million-year-old skull was found during a dig in the Republic of Georgia. Georgian National Museum

Published: October 17, 2013

After eight years spent studying a 1.8-million-year-old skull uncovered in the Republic of Georgia, scientists have made a discovery that may rewrite the evolutionary history of our human genus Homo.

A Simpler Family Tree?

A skull discovered in the Republic of Georgia may indicate that early species in the genus Homo were actually closely related members of a single evolutionary lineage.

An artist’s rendition of what the original owner of Skull 5 may have looked like.

Skull 5, which was discovered alongside the remains of four other hominids in Dmanisi, Georgia. Courtesy of Guram Bumbiashvili, Georgian National Museum

An aerial view of the Dmanisi excavation site (foreground) and a medieval town. Fernando Javier Urquijo

It would be a simpler story with fewer ancestral species. Early, diverse fossils — those currently recognized as coming from distinct species like Homo habilis, Homo erectus and others — may actually represent variation among members of a single, evolving lineage.

In other words, just as people look different from one another today, so did early hominids look different from one another, and the dissimilarity of the bones they left behind may have fooled scientists into thinking they came from different species.

This was the conclusion reached by an international team of scientists led by David Lordkipanidze, a paleoanthropologist at the Georgian National Museum in Tbilisi, as reported Thursday in the journal Science.

The key to this revelation was a cranium excavated in 2005 and known simply as Skull 5, which scientists described as “the world’s first completely preserved adult hominid skull” of such antiquity. Unlike other Homo fossils, it had a number of primitive features: a long, apelike face, large teeth and a tiny braincase, about one-third the size of that of a modern human being. This confirmed that, contrary to some conjecture, early hominids did not need big brains to make their way out of Africa.

The discovery of Skull 5 alongside the remains of four other hominids at Dmanisi, a site in Georgia rich in material of the earliest hominid travels into Eurasia, gave the scientists an opportunity to compare and contrast the physical traits of ancestors that apparently lived at the same location and around the same time.

Dr. Lordkipanidze and his colleagues said the differences between these fossils were no more pronounced than those between any given five modern humans or five chimpanzees. The hominids who left the fossils, they noted, were quite different from one another but still members of one species.

“Had the braincase and the face of Skull 5 been found as separate fossils at different sites in Africa, they might have been attributed to different species,” a co-author of the journal report, Christoph Zollikofer of the University of Zurich, said in a statement. Such was often the practice of researchers, using variations in traits to define new species.

Although the Dmanisi finds look quite different from one another, Dr. Zollikofer said, the hominids who left them were living at the same time and place, and “so could, in principle, represent a single population of a single species.” He and his Zurich colleague, Marcia Ponce de León, conducted the comparative analysis of the Dmanisi specimens.

“Since we see a similar pattern and range of variation in the African fossil record,” Dr. Zollikofer continued, “it is sensible to assume that there was a single Homo species at that time in Africa.” Moreover, he added, “since the Dmanisi hominids are so similar to the African ones, we further assume that they both represent the same species.”

But what species? Some team members simply call their finds “early Homo.” Others emphasized the strong similarities to Homo erectus, which lived from about two million years ago until less than one million years ago. Tim D. White, a paleoanthropologist at the University of California, Berkeley, called it “the most primitive H. erectus yet known,” noting that “it is more similar than any other yet found to early Homo from eastern Africa,” a group of hominids estimated to have lived 2.3 million years ago.

All five of the skulls and skeletal bones were found in underground dens, suggesting grisly scenes from the perilous lives these early Homos led. They resided among carnivores, including saber-toothed cats and an extinct giant cheetah. All five of the individuals had probably been attacked and killed by the carnivores, their carcasses dragged into the dens for the after-hunt feast, with nothing left but dinner scraps for curious fossil hunters.

Dr. White and other scientists not involved in the research hailed the importance of the skull discovery and its implications for understanding early Homo evolution. In an article analyzing the report, Science quoted Ian Tattersall of the American Museum of Natural History in New York as saying that the skull “is undoubtedly one of the most important ever discovered.”

A few scientists quibbled that the skull looks more like Homo habilis or questioned the idea that fossils in Africa all belong to Homo erectus, but there was broad recognition that the new findings were a watershed in the study of evolution. “As the most complete early Homo skull ever found,” Dr. White wrote in an e-mail, “it will become iconic for Dmanisi, for earliest Homo erectus and more broadly for how we became human.”

Dr. White, who has excavated hominid fossils in Ethiopia for years, said he was impressed with “the total evidentiary package from the site that is the really good news story here.” Further, he said, he hoped the discovery would “now focus the debate on evolutionary biology beyond the boring ‘lumpers vs. splitters’ ” — a reference to the tendencies of fossil hunters to either lump new finds into existing species or split them off into new species.

In their report, the Dmanisi researchers said the Skull 5 individual “provides the first evidence that early Homo comprised adult individuals with small brains but body mass, stature and limb proportions reaching the lower range limit of modern variation.”

Skeletal bones associated with the five Dmanisi skulls show that these hominids were short in stature, but that their limbs enabled them to walk long distances as fully upright bipeds. The shape of the small braincase distinguished them from the more primitive Australopithecus genus, which preceded Homo and lived alongside it in Africa for a long time.

Could Ants Teach the Biofuel Industry a Thing or Two? (Quest)

Posted on Sep 26, 2013

Leafcutter ants, native to Central and South America, can’t digest the leaves they rely on for food, so they cultivate these gardens of fungi and bacteria to break down plant matter for them. Photo courtesy of Alex Wild; used with permission.

In the lobby of the Microbial Sciences building at the University of Wisconsin, leafcutter ants in a display colony hike back and forth. Improbably large leaf fragments wobble on their backs as the ants ferry them between a dwindling pile of oak leaves and a garden of fungus studded with leaves in assorted states of decay.

Made up of a single species of fungus and a handful of bacterial strains, the fungus garden breaks down the ants’ leafy harvest through an efficient natural process. It’s a process that researchers believe could be a model for producing biofuel in a more sustainable way.

As we transition away from petroleum dependence, ethanol-based biofuel has risen to the forefront as one of the most accessible sources of renewable energy. It’s produced by fermenting plant sugars, which are strung together into long chains called polysaccharides. Before the fermentation process can begin, these chains have to be snipped apart, a process that varies in difficulty depending on the type of plant being used.

Polysaccharide chains found in corn kernels — the primary biofuel crop in the U.S. — are relatively simple to break up. But corn depletes the soil and guzzles water and fertilizer, and using it for fuel siphons calories from the food supply to gas tanks.

On the other hand, perennial grasses and agricultural “waste” like cornstalks offer a biofuel source that has a lighter impact on the environment. But these woodier fibers — referred to as “cellulosic” biomass — are a tangle of robust polysaccharides that are trickier to deconstruct. Further complicating this problem, the molecular structure of plant biomass isn’t uniform. What breaks down the polysaccharides near the surface of a cornstalk or blade of grass might not work at all on those buried more deeply.

University of Wisconsin researcher Frank Aylward peers into one of the lab’s many leafcutter ant colonies.

But finding efficient ways to extract energy from plants and other forms of biomass is not a new problem. In fact, it’s a problem that Earth’s plant eaters solved millions of years ago. And according to University of Wisconsin researcher Frank Aylward, if you’re looking for a model system, you can’t do better than leafcutter ants.

They may not have the imposing mien of herbivores like giraffes or elephants, but in Central and South America, leafcutter ants dominate, munching through more of the region’s foliage than any other organism.

But the ants can’t digest leaves by themselves — they have to rely on the garden’s microbes. “We sort of think of the fungus gardens as being an external gut,” Aylward explains. The garden digests biomass and reconstitutes its molecules in little nutrient packets holding a cocktail of carbohydrates, lipids, and proteins.

“The ants are essentially doing what we want to do with biofuel,” says Aylward. “They’re taking all of this recalcitrant plant biomass that’s full of all of these really complicated polymers and they’re degrading it and converting it into energy.” The transformation from leafy greens to energy source is mediated by hundreds of enzymes produced by the fungus garden’s microbes. If these enzymes chow down so efficiently on the leaves of Central America, Aylward and his coworkers wondered, could they be just as effective at breaking apart the sugars of cellulosic biomass in an industrial setting?

One model for a commercial biofuel process patterned after the fungus garden could entail splicing the genetic codes for the garden’s most effective enzymes into other microbes, prompting them to churn out biomass-digesting proteins.

But first, scientists needed to identify which enzymes the garden uses to digest leaves for the ants and which microbial residents produce them. By sequencing the genomes of the fungus and bacteria and comparing that data to the garden’s enzyme soup, Aylward and his coworkers were able to identify a fungus called Leucoagaricus gongylophorus as the garden’s biomass-degrading workhorse.

Aylward extracts a fragment of the fungus garden. This segment was near the surface, and still shows visible leaf matter; the biomass in the garden sinks as it’s broken down.

They also found that the fungus calibrates its enzyme cocktail for different stages of leaf decay. The biomass profile changes at each level in the garden — the freshest leaves sit near the top and the mostly decomposed waste material at the bottom. And Aylward found that the garden’s enzymes changed, too. That insight could provide the biofuel industry with some clues about which enzymes might excel early in the polysaccharide-decomposition process and which ones to apply later on.

Incidentally, this division of labor also reveals which enzymes the garden deploys together at each level. This is a huge boon to anyone designing industrial applications, since enzymes tend to work much better in specific combinations — and the garden has had 50 million years of symbiosis with the ants to find the most efficient combinations.

Aylward has already been approached by companies interested in synthesizing some of the garden’s enzymes and using them in biofuel production.

“It’s difficult to think that we can actually find a process that improves on nature,” Aylward points out, “so it probably makes sense to learn from it.”

Ice Cap Shows Ancient Mines Polluted the Globe (New York Times)

By MALCOLM W. BROWNE

Published: December 09, 1997

Samples extracted from Greenland’s two-mile-deep ice cap have yielded evidence that ancient Carthaginian and Roman silver miners working in southern Spain fouled the global atmosphere with lead for some 900 years.

The Greenland ice cap accumulates snow year after year, and substances from the atmosphere are entrapped in the permanent ice. From 1990 to 1992, a drill operated by the European Greenland Ice-Core Project recovered a cylindrical ice sample 9,938 feet long, pieces of which were distributed to participating laboratories. The ages of successive layers of the ice cap have been accurately determined, so the chemical makeup of the atmosphere at any given time in the past 9,000 years can be estimated by analyzing the corresponding part of the core sample.

Using exquisitely sensitive techniques to measure four different isotopes of lead in the Greenland ice, scientists in Australia and France determined that most of the man-made lead pollution of the atmosphere in ancient times had come from the Spanish provinces of Huelva, Seville, Almeria and Murcia. Isotopic analysis clearly pointed to the rich silver-mining and smelting district of Rio Tinto near the modern city of Nerva as the main polluter.

The results of this study were reported in the current issue of Environmental Science & Technology by Dr. Kevin J. R. Rosman of Curtin University in Perth, Australia, and his colleagues there and at the Laboratory of Glaciology and Geophysics of the Environment in Grenoble, France.

One of the problems in their analyses, the authors wrote, was the very low concentrations of lead remaining in ice dating from ancient times — only about one-hundredth the lead level found in Greenland ice deposited in the last 30 years. But the investigators used mass-spectrometric techniques that permitted them to sort out isotopic lead composition at lead levels of only about one part per trillion.

Dr. Rosman focused on the ratio of two stable isotopes, or forms, of lead: lead-206 and lead-207. His group found that the ratio of lead-206 to lead-207 in 8,000-year-old ice was 1.201. That was taken as the natural ratio that existed before people began smelting ores. But between 600 B.C. and A.D. 300, the scientists found, the ratio of lead-206 to lead-207 fell to 1.183. They called that “unequivocal evidence of early large-scale atmospheric pollution by this toxic metal.”

All ore bodies containing lead have their own isotopic signatures, and the Rio Tinto lead ratio is 1.164. Calculations by the Australian-French collaboration based on their ice-core analysis showed that during the period 366 B.C. to at least A.D. 36, a period when the Roman Empire was at its peak, 70 percent of the global atmospheric lead pollution came from the Roman-operated Rio Tinto mines in what is now southwestern Spain.
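As a rough illustration of how such isotope ratios translate into pollution estimates, a simple two-endmember mixing calculation can be sketched from the values quoted above. This is not the authors’ method, which used four lead isotopes and concentration data; and note that the result here is the fraction of the sampled lead attributable to smelting, a different quantity from the 70 percent share of the pollution attributed to Rio Tinto.

    # Two-endmember mixing sketch (illustrative only; the published study used
    # four lead isotopes and concentration data, not this simplification).
    R_NATURAL = 1.201    # 206Pb/207Pb in 8,000-year-old ice (pre-smelting)
    R_RIO_TINTO = 1.164  # 206Pb/207Pb signature of Rio Tinto ore
    r_observed = 1.183   # ratio measured in ice from 600 B.C. to A.D. 300

    # Fraction of the (207Pb-weighted) lead attributable to the ore source,
    # assuming all of the pollution carried the Rio Tinto signature.
    f_pollution = (R_NATURAL - r_observed) / (R_NATURAL - R_RIO_TINTO)
    print(f"polluted fraction of lead in the sample: {f_pollution:.0%}")  # roughly 49%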

The Rio Tinto mining region is known to archeologists as one of the richest sources of silver in the ancient world. Some 6.6 million tons of slag were left by Roman smelting operations there.

The global demand for silver increased dramatically after coinage was introduced in Greece around 650 B.C. But silver was only one of the treasures extracted from its ore. The sulfide ore smelted by the Romans also yielded an enormous harvest of lead.

Because it is easily shaped, melted and molded, lead was widely used by the Romans for plumbing, stapling masonry together, casting statues and manufacturing many kinds of utensils. All these uses presumably contributed to the chronic poisoning of Rome’s peoples.

Adding to the toxic hazard, Romans used lead vessels to boil and concentrate fruit juices and preserves. Fruits contain acetic acid, which reacts with metallic lead to form lead acetate, a compound once known as “sugar of lead.” Lead acetate adds a pleasant sweet taste to food but causes lead poisoning — an ailment that is often fatal and, even in mild cases, causes debilitation and loss of cognitive ability.

Judging from the Greenland ice core, the smelting of lead-bearing ore declined sharply after the fall of the Roman Empire but gradually increased during the Renaissance. By 1523, the last year for which Dr. Rosman’s group conducted its Greenland ice analysis, atmospheric lead pollution had reached nearly the same level recorded for the year 79 B.C., at the peak of Roman mining pollution.

How Scott Collis Is Harnessing New Data To Improve Climate Models (Popular Science)

The former ski bum built open-access tools that convert raw data from radar databases into formats that climate modelers can use to better predict climate change.

By Veronique Greenwood and Valerie Ross

Posted 10.16.2013 at 3:00 pm

Scott Collis (by Joel Kimmel)

Each year, Popular Science seeks out the brightest young scientists and engineers and names them the Brilliant Ten. Like the 110 honorees before them, the members of this year’s class are dramatically reshaping their fields–and the future. Some are tackling pragmatic questions, like how to secure the Internet, while others are attacking more abstract ones, like determining the weather on distant exoplanets. The common thread between them is brilliance, of course, but also impact. If the Brilliant Ten are the faces of things to come, the world will be a safer, smarter, and brighter place.–The Editors

Scott Collis

Argonne National Laboratory

Achievement

Harnessing new data to improve climate models

Clouds are one of the great challenges for climate scientists. They play a complex role in the atmosphere and in any potential climate-change scenario. But rudimentary data has simplified their role in simulations, leading to variability among climate models. Scott Collis discovered a way to add accuracy to forecasts of future climate—by tapping new sources of cloud data.

Collis has extensive experience watching clouds, first as a ski bum during grad school in Australia and then as a professional meteorologist. But when he took a job at the Centre for Australian Weather and Climate Research, he realized there was an immense source of cloud data that climate modelers weren’t using: the information collected for weather forecasts. So Collis took on the gargantuan task of building open-access tools that convert the raw data from radar databases into formats that climate modelers can use. In one stroke, Collis unlocked years of weather data. “We were able to build such robust algorithms that they could work over thousands of radar volumes without human intervention,” says Collis.
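The article doesn’t name the software, but open-source radar toolkits of this kind (for example the Python ARM Radar Toolkit, Py-ART, which Collis helped develop) typically read a raw radar volume, expose its measured fields, and write them back out in the CF/Radial NetCDF format that modelers can ingest. A minimal sketch, with a placeholder file name:

    # Minimal sketch of the kind of conversion described; the input file name
    # is a placeholder, and Py-ART is offered as an example toolkit, not
    # necessarily the exact software discussed in the article.
    import pyart

    radar = pyart.io.read("raw_radar_volume_file")        # auto-detects the raw format
    print(list(radar.fields.keys()))                      # measured moments available
    pyart.io.write_cfradial("cfradial_volume.nc", radar)  # modeler-friendly NetCDF output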

When the U.S. Department of Energy caught wind of his project, it recruited him to work with a new radar network designed to collect high-quality cloud data from all over the globe. The network, the largest of its kind, isn’t complete yet, but already the data that Collis and his collaborators have collected is improving next-generation climate models.

Click here to see more from our annual celebration of young researchers whose innovations will change the world. This article originally appeared in the October 2013 issue of Popular Science.

Tool Accurately Predicts Whether A Kickstarter Project Will Bomb (Popular Science)

At about 76 percent accuracy, a new prediction model is the best yet. “Your chances of success are at 8 percent. Commence panic.”

By Colin Lecher

Posted 10.16.2013 at 2:00 pm

Ouya, A Popular Kickstarter Project

Well, here’s something either very discouraging or very exciting for crowdfunding hopefuls: a Swiss team can predict, with about 76 percent accuracy and within only four hours of launch, whether a Kickstarter project will succeed.

The team, from the École Polytechnique Fédérale de Lausanne, laid out a system in a paper presented at the Conference on Online Social Networks. By mining data on more than 16,000 Kickstarter campaigns and more than 1.3 million users, they created a prediction model based on a project’s popularity on Twitter, the rate at which it is raising cash, how many first-time backers it has, and the previous projects its supporters have backed.

A previous, similar model built by American researchers could predict a Kickstarter project’s success with 68 percent accuracy. That is impressive, but the Swiss project has another advantage: it’s dynamic. While the American model could only make a prediction before a project launched, the Swiss system monitors projects in real time. They’ve even built a tool, called Sidekick, that monitors projects and displays their chances of success.
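As an illustration of the general approach (not the EPFL team’s actual model), a dynamic predictor of this kind can be sketched as a classifier over features measured in the first hours after launch; the feature names and training data below are hypothetical, and scikit-learn’s logistic regression stands in for whatever model the authors used.

    # Illustrative only: a dynamic success predictor in the spirit of the paper.
    # Feature names and data are hypothetical, not the EPFL team's model.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Per-project features measured a few hours after launch:
    # [tweets_per_hour, pledge_rate_usd_per_hour, first_time_backer_share,
    #  mean_success_rate_of_backers_past_projects]
    X_train = np.array([
        [40.0, 120.0, 0.30, 0.55],
        [ 2.0,   5.0, 0.80, 0.20],
        [15.0,  60.0, 0.50, 0.45],
        [ 1.0,   2.0, 0.90, 0.10],
    ])
    y_train = np.array([1, 0, 1, 0])  # 1 = funded, 0 = failed

    model = LogisticRegression().fit(X_train, y_train)

    new_project = np.array([[25.0, 80.0, 0.40, 0.50]])
    print(f"chance of success: {model.predict_proba(new_project)[0, 1]:.0%}")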

Other sites, like Kicktraq, offer similar services, but the predictions aren’t as accurate as the Swiss team claims theirs are. If you peruse Sidekick, you can see how confident the algorithm is in its pass/fail predictions: almost all of the projects are either above 90 percent or below 10 percent. Sort of scary, probably, if you’re launching a project. Although there’s always a chance you could pull yourself out of the hole, it’s like a genie asking if you want to know how you die: Do you really want that information?

[Guardian]

Carbon Cycle Models Underestimate Indirect Role of Animals (Science Daily)

Oct. 16, 2013 — Animal populations can have a far more significant impact on carbon storage and exchange in regional ecosystems than is typically recognized by global carbon models, according to a new paper authored by researchers at the Yale School of Forestry & Environmental Studies (F&ES). 

Wildebeest herd, Serengeti. Scientists found that a decline in wildebeest populations in the Serengeti-Mara grassland-savanna system decades ago allowed organic matter to accumulate, which eventually caused about 80 percent of the ecosystem to burn annually, releasing carbon from the plants and the soil, before populations recovered in recent years. (Credit: © photocreo / Fotolia)

In fact, in some regions the magnitude of carbon uptake or release due to the effects of specific animal species or groups of animals — such as the pine beetles devouring forests in western North America — can rival the impact of fossil fuel emissions for the same region, according to the paper published in the journal Ecosystems.

While models typically take into account how plants and microbes affect the carbon cycle, they often underestimate how much animals can indirectly alter the absorption, release, or transport of carbon within an ecosystem, says Oswald Schmitz, the Oastler Professor of Population and Community Ecology at F&ES and lead author of the paper. Historically, the role of animals has been largely underplayed, since animal species are not distributed globally and the total biomass of animals is vastly lower than that of the plants they rely upon, so animals contribute little carbon directly through respiration.

“What these sorts of analyses have not paid attention to is what we call the indirect multiplier effects,” Schmitz says. “And these indirect effects can be quite huge — and disproportionate to the biomass of the species that are instigating the change.”

In the paper, “Animating the Carbon Cycle,” a team of 15 authors from 12 universities, research organizations and government agencies cites numerous cases where animals have triggered profound impacts on the carbon cycle at local and regional levels.

In one case, an unprecedented loss of trees triggered by the pine beetle outbreak in western North America has decreased the net carbon balance on a scale comparable to British Columbia’s current fossil fuel emissions.

And in East Africa, scientists found that a decline in wildebeest populations in the Serengeti-Mara grassland-savanna system decades ago allowed organic matter to accumulate, which eventually caused about 80 percent of the ecosystem to burn annually, releasing carbon from the plants and the soil, before populations recovered in recent years.

“These are examples where the animals’ largest effects are not direct ones,” Schmitz says. “But because of their presence they mitigate or mediate ecosystem processes that then can have these ramifying effects.”

“We hope this article will inspire scientists and managers to include animals when thinking of local and regional carbon budgets,” said Peter Raymond, a professor of ecosystem ecology at the Yale School of Forestry & Environmental Studies.

According to the authors, a more proper assessment of such phenomena could provide insights into management schemes that could help mitigate the threat of climate change.

For example, in the Arctic, where about 500 gigatons of carbon is stored in permafrost, large grazing mammals like caribou and muskoxen can help maintain the grasslands that have a high albedo and thus reflect more solar energy. In addition, by trampling the ground these herds can actually help reduce the rate of permafrost thaw, researchers say.

“It’s almost an argument for rewilding places to make sure that the natural balance of predators and prey are there,” Schmitz says. “We’re not saying that managing animals will offset these carbon emissions. What we’re trying to say is the numbers are of a scale where it is worthwhile to start thinking about how animals could be managed to accomplish that.”

Journal Reference:

  1. Oswald J. Schmitz, Peter A. Raymond, James A. Estes, Werner A. Kurz, Gordon W. Holtgrieve, Mark E. Ritchie, Daniel E. Schindler, Amanda C. Spivak, Rod W. Wilson, Mark A. Bradford, Villy Christensen, Linda Deegan, Victor Smetacek, Michael J. Vanni, Christopher C. Wilmers. Animating the Carbon Cycle. Ecosystems, 2013; DOI: 10.1007/s10021-013-9715-7

Software Uses Cyborg Swarm to Map Unknown Environs (Science Daily)

Oct. 16, 2013 — Researchers from North Carolina State University have developed software that allows them to map unknown environments — such as collapsed buildings — based on the movement of a swarm of insect cyborgs, or “biobots.”

Researchers from North Carolina State University have developed software that allows them to map unknown environments — such as collapsed buildings — based on the movement of a swarm of insect cyborgs, or “biobots.” (Credit: Image by Edgar Lobaton.)

“We focused on how to map areas where you have little or no precise information on where each biobot is, such as a collapsed building where you can’t use GPS technology,” says Dr. Edgar Lobaton, an assistant professor of electrical and computer engineering at NC State and senior author of a paper on the research.

“One characteristic of biobots is that their movement can be somewhat random,” Lobaton says. “We’re exploiting that random movement to work in our favor.”

Here’s how the process would work in the field. A swarm of biobots, such as remotely controlled cockroaches, would be equipped with electronic sensors and released into a collapsed building or other hard-to-reach area. The biobots would initially be allowed to move about randomly. Because the biobots couldn’t be tracked by GPS, their precise locations would be unknown. However, the sensors would signal researchers via radio waves whenever biobots got close to each other.

Once the swarm has had a chance to spread out, the researchers would send a signal commanding the biobots to keep moving until they find a wall or other unbroken surface — and then continue moving along the wall. This is called “wall following.”

The researchers repeat this cycle of random movement and “wall following” several times, continually collecting data from the sensors whenever the biobots are near each other. The new software then uses an algorithm to translate the biobot sensor data into a rough map of the unknown environment.
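The raw input to that mapping step is just a log of which biobots came near which others, and when. Below is a minimal sketch of that bookkeeping with hypothetical encounter data; the actual algorithm applies topological analysis to this kind of proximity information, which this sketch does not attempt.

    # Hypothetical encounter log: (time_s, biobot_i, biobot_j) recorded whenever
    # two biobots signal that they are within radio range of each other.
    from collections import defaultdict

    encounters = [
        (1.0, 0, 3), (2.5, 1, 4), (4.0, 0, 3), (7.2, 2, 5),
        (9.8, 3, 4), (12.1, 1, 4), (15.6, 2, 5), (18.0, 0, 1),
    ]

    # Build a weighted encounter graph: nodes are biobots, edge weights count
    # how often each pair met. Densely connected clusters hint at biobots that
    # shared the same region of the collapsed structure.
    edge_counts = defaultdict(int)
    for _, i, j in encounters:
        edge_counts[tuple(sorted((i, j)))] += 1

    for (i, j), n in sorted(edge_counts.items()):
        print(f"biobots {i} and {j} met {n} time(s)")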

“This would give first responders a good idea of the layout in a previously unmapped area,” Lobaton says.

The software would also allow public safety officials to determine the location of radioactive or chemical threats, if the biobots have been equipped with the relevant sensors.

The researchers have tested the software using computer simulations and are currently testing the program with robots. They plan to work with fellow NC State researcher Dr. Alper Bozkurt to test the program with biobots.

The paper, “Topological Mapping of Unknown Environments using an Unlocalized Robotic Swarm,” will be presented at the International Conference on Intelligent Robots and Systems being held Nov. 3-8 in Tokyo, Japan. Lead author of the paper is Alireza Dirafzoon, a Ph.D. student at NC State. The work was supported by National Science Foundation grant CNS-1239243.

Economic Dangers of ‘Peak Oil’ Addressed (Science Daily)

Oct. 16, 2013 — Researchers from the University of Maryland and a leading university in Spain demonstrate in a new study which sectors could put the entire U.S. economy at risk when global oil production peaks (“Peak Oil”). This multi-disciplinary team recommends immediate action by government, private and commercial sectors to reduce the vulnerability of these sectors.

The figure above shows sectors’ importance and vulnerability to Peak Oil. The bubbles represent sectors. The size of each bubble visualizes the vulnerability of a particular sector to Peak Oil according to the expected price changes; the larger the bubble, the more vulnerable the sector is considered to be. The X axis shows a sector’s importance according to its contribution to GDP, and the Y axis shows its importance according to its structural role. Hence, the larger bubbles in the top right corner represent highly vulnerable and highly important sectors. In the case of Peak Oil induced supply disruptions, these sectors could cause severe imbalances for the entire U.S. economy. (Credit: Image courtesy of University of Maryland)

While critics of Peak Oil studies declare that the world has more than enough oil to maintain current national and global standards, these UMD-led researchers say Peak Oil is imminent, if not already here — and is a real threat to national and global economies. Their study is among the first to outline a way of assessing the vulnerabilities of specific economic sectors to this threat, and to identify focal points for action that could strengthen the U.S. economy and make it less vulnerable to disasters.

Their work, “Economic Vulnerability to Peak Oil,” appears in Global Environmental Change. The paper is co-authored by Christina Prell, UMD’s Department of Sociology; Kuishuang Feng and Klaus Hubacek, UMD’s Department of Geographical Sciences; and Christian Kerschner, Institut de Ciència i Tecnologia Ambientals, Universitat Autònoma de Barcelona.

Peak Oil is gaining increasing attention in both scientific and policy discourses, especially because of its apparent imminence and potential dangers. Until now, however, little has been known about how this phenomenon will impact economies. In their paper, the research team constructs a vulnerability map of the U.S. economy, combining two approaches for analyzing economic systems. Their approach reveals the relative importance of individual economic sectors and how vulnerable each is to oil price shocks. This dual analysis helps identify which sectors could put the entire U.S. economy at risk from Peak Oil. For the United States, such sectors include iron mills, chemical and plastic products manufacturing, fertilizer production and air transport.
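As a toy illustration of how such a two-axis map can be assembled (the sector names and scores below are placeholders, not the study’s input-output results), one can rank sectors by combining importance and price-shock exposure:

    # Placeholder sector data: (GDP contribution score, structural-role score,
    # expected price-change exposure). Values are illustrative, not the study's.
    sectors = {
        "iron mills":             (0.6, 0.8, 0.9),
        "chemicals and plastics": (0.7, 0.9, 0.8),
        "fertilizer production":  (0.3, 0.7, 0.9),
        "air transport":          (0.5, 0.6, 0.8),
        "retail":                 (0.8, 0.3, 0.2),
    }

    # Rank sectors by a combined score: important on either axis AND vulnerable.
    def risk(importance_gdp, importance_structural, vulnerability):
        return max(importance_gdp, importance_structural) * vulnerability

    for name, (gdp, structural, vuln) in sorted(
            sectors.items(), key=lambda kv: -risk(*kv[1])):
        print(f"{name:25s} risk score {risk(gdp, structural, vuln):.2f}")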

“Our findings provide early warnings to these and related industries about potential trouble in their supply chain,” UMD Professor Hubacek said. “Our aim is to inform and engage government, public and private industry leaders, and to provide a tool for effective Peak Oil policy action planning.”

Although the team’s analysis is embedded in a Peak Oil narrative, it can be used more broadly to develop a climate roadmap for a low carbon economy.

“In this paper, we analyze the vulnerability of the U.S. economy, which is the biggest consumer of oil and oil-based products in the world, and thus provides a good example of an economic system with high resource dependence. However, the notable advantage of our approach is that it does not depend on the Peak-Oil-vulnerability narrative but is equally useful in a climate change context, for designing policies to reduce carbon dioxide emissions. In that case, one could easily include other fossil fuels such as coal in the model and results could help policy makers to identify which sectors can be controlled and/or managed for a maximum, low-carbon effect, without destabilizing the economy,” Professor Hubacek said.

One of the main ways a Peak Oil vulnerable industry can become less so, the authors say, is for that sector to reduce the structural and financial importance of oil. For example, Hubacek and colleagues note that one approach to reducing the importance of oil to agriculture could be to curb the strong dependence on artificial fertilizers by promoting organic farming techniques, and/or to reduce the overall distance travelled by people and goods by fostering local, decentralized food economies.

Peak Oil Background and Impact

The Peak Oil dialogue shifts attention away from discourses on “oil depletion” and “stocks” to focus on declining production rates (flows) of oil and increasing costs of production. The maximum possible daily flow rate (with a given technology) is what eventually determines the peak; thus, the concept can also be useful in the context of other non-renewable resources.

Improvements in extraction and refining technologies can influence flows, but this tends to lead to steeper decline curves after the peak is eventually reached. Such steep decline curves have also been observed for shale gas wells.

“Shale developments are, so we believe, largely overrated, because of the huge amounts of financial resources that went into them (danger of bubble) and because of their apparent steep decline rates (shale wells tend to peak fast),” according to Dr. Kerschner.

“One important implication of this shift in the dialogue is that extraction peaks occur much earlier in time than the actual depletion of resources,” Professor Hubacek said. “In other words, Peak Oil is currently predicted by many within the next decade, whereas complete oil depletion will in fact never occur, given increasing prices. This means that eventually petroleum products may be sold in liter bottles in pharmacies, like in the old days.”

Journal Reference:

  1. Christian Kerschner, Christina Prell, Kuishuang Feng, Klaus Hubacek. Economic vulnerability to Peak Oil. Global Environmental Change, 2013; DOI: 10.1016/j.gloenvcha.2013.08.015

Cockroach farms multiplying in China (L.A.Times)

Dried cockroaches are ready to be sold to pharmaceutical companies from a farm in Jinan, China. One farmer says the insects are easy to raise and profitable.

Farmers are pinning their future on the often-dreaded insect, which when dried goes for as much as $20 a pound — for use in Asian medicine and in cosmetics.

BY BARBARA DEMICK

PHOTOGRAPHY AND VIDEO BY WANG XUHUA

REPORTING FROM JINAN, CHINA

Oct. 15, 2013

This squat concrete building was once a chicken coop, but now it’s part of a farm with an entirely different kind of livestock — millions of cockroaches.

Inside, squirming masses of the reddish-brown insects dart between sheets of corrugated metal and egg cartons that have been tied together to provide the kind of dark hiding places they favor.

Wang Fuming kneels down and pulls out one of the nests. Unaccustomed to the light, the roaches scurry about, a few heading straight up his arm toward his short-sleeve shirt.

“Nothing to be afraid of,” Wang counsels visitors who are shrinking back into the hallway, where stray cockroaches cling to a ceiling that’s perilously close overhead.

Although cockroaches evoke a visceral dread for most people, Wang looks at them fondly as his fortune — and his future.

“People laughed at me when I started, but I always thought that cockroaches would bring me wealth.”

— Zou Hui, cockroach farmer

The 43-year-old businessman is the largest cockroach producer in China (and thus probably in the world), with six farms populated by an estimated 10 million cockroaches. He sells them to producers of Asian medicine and to cosmetic companies that value the insects as a cheap source of protein as well as for the cellulose-like substance on their wings.

The favored breed for this purpose is the Periplaneta americana, or American cockroach, a reddish-brown insect that grows to about 1.6 inches long and, when mature, can fly, as opposed to the smaller, darker, wingless German cockroach.

Since Wang got into the business in 2010, the price of dried cockroaches has increased tenfold, from about $2 a pound to as much as $20, as manufacturers of traditional medicine stockpile pulverized cockroach powder.

“I thought about raising pigs, but with traditional farming, the profit margins are very low,” Wang said. “With cockroaches, you can invest 20 yuan and get back 150 yuan,” or $3.25 for a return of about $24.
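For reference, the arithmetic behind those figures, using an approximate 2013 exchange rate of about 6.1 yuan to the U.S. dollar (an assumption used here only for illustration):

    # Quick check of the quoted figures; the exchange rate is an approximation.
    YUAN_PER_USD = 6.1
    investment_yuan, payback_yuan = 20, 150
    print(f"invest ${investment_yuan / YUAN_PER_USD:.2f}, "
          f"get back ${payback_yuan / YUAN_PER_USD:.2f}")   # about $3.28 and $24.59
    print(f"return multiple: {payback_yuan / investment_yuan:.1f}x")  # 7.5x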

China has about 100 cockroach farms, and new ones are opening almost as fast as the prolific critters breed. But even among Chinese, the industry was little known until August, when a million cockroaches got out of a farm in neighboring Jiangsu province. The Great Escape made headlines around China and beyond, evoking biblical images of swarming locusts.

Big moneymaker

Business is booming at the Shandong Xin Da Ground Beetle Farm.

Only the prospect of all those lost earnings would faze Wang, a compact man with a wisp of a mustache and wire-rim glasses who looks like a scientist, but has no more than a high school education. After graduating, he went to work in a tire factory.

“I felt I would never get anywhere in life at the factory and I wanted to start a business,” he said.

As a boy he had liked collecting insects, so he started with scorpions and beetles, both used in traditional medicine and served as a delicacy. One batch of his beetle eggs turned out to be contaminated with cockroach eggs.

“I was accidentally raising cockroaches and then I realized they were the easiest and most profitable,” he said.

The start-up costs are minimal — Wang bought only eggs, a run-down abandoned chicken coop and roofing tiles. Notoriously hardy, roaches aren’t susceptible to the same diseases as farm animals. As for feeding them, cockroaches are omnivores, though they favor rotten vegetables. Wang feeds his brood with potato and pumpkin peelings discarded from nearby restaurants.

“Cockroaches are survivors. We want to know what makes them so strong.”

— Li Shunan, professor of traditional medicine

Killing them is easy too: Just scoop or vacuum them out of their nests and dunk them in a big vat of boiling water. Then they’re dried in the sun like chile peppers.

Perhaps understandably, the cockroach business (“special farming,” as it is euphemistically called) is a fairly secretive industry. Wang’s farm, for instance, operates in an agribusiness industrial park under an elevated highway. The sign at the front gate simply reads Jinan Hualu Feed Co.

Some companies that use cockroaches don’t like to advertise their “secret ingredient.” And the farmers themselves are wary of neighbors who might not like a cockroach farm in their backyard.

“We try to keep a low profile,” said Liu Yusheng, head of the Shandong Insect Industry Assn., the closest thing there is to a trade organization. “The government is tacitly allowing us to do what we do, but if there is too much attention, or if cockroach farms are going into residential areas, there could be trouble.”

Liu worries about the rapid growth of an industry with too many inexperienced players and too little oversight. In 2007, a million Chinese lost $1.2 billion when a firm promoting ant farming turned out to be a Ponzi scheme and went bankrupt.

“This is not like raising regular farm animals or vegetables where the Agricultural Ministry knows who is supposed to regulate it. Nobody knows who is in charge here,” he said.

The low start-up costs make raising cockroaches an appealing business for wannabe entrepreneurs, who can buy cockroach eggs and complete how-to kits from promoters.

“People laughed at me when I started, but I always thought that cockroaches would bring me wealth,” said Zou Hui, 40, who quit her job at a knitting factory in 2008 after seeing a television program about raising cockroaches.

Wang Fuming, at his farm in Jinan, is the largest cockroach producer in China (and thus probably in the world), with six farms populated by an estimated 10 million cockroaches.

It’s not exactly a fortune, but the $10,000 she brings in annually selling cockroaches is decent money for her hometown in rural Sichuan province, and won her an award last year from local government as an “Expert in Getting Wealthy.”

“Now I’m teaching four other families,” Zou said. “They want to get rich like me.”

But inexperienced farmers can get into trouble, as Wang Pengsheng (no relation to fellow roach farmer Wang) found out after his cockroaches staged the Great Escape.

He had opened his farm just six months earlier in a newly constructed building that municipal code officials complained was too close to protected watershed land. At noon on Aug. 20, while workers were out for lunch, a demolition crew knocked down the building. The roaches made a run for it.

“They didn’t know I had cockroaches in there. They wouldn’t have demolished the building like that if there were cockroaches that would get out,” Wang Pengsheng said in a telephone interview.

After discovering the flattened building and homeless roaches scurrying among the rubble, he tried to corral the escapees but was unsuccessful. He called in local health officials, who helped him exterminate the roaches. Wang said he has received about $8,000 in compensation from local government and hopes to use the money to rebuild his farm elsewhere.

At least five pharmaceutical companies are using cockroaches for traditional Chinese medicine. Research is underway in China (and South Korea) on the use of pulverized cockroaches for treating baldness, AIDS and cancer and as a vitamin supplement. South Korea’s Jeonnam Province Agricultural Research Institute and China’s Dali University College of Pharmacy have published papers on the anti-carcinogenic properties of the cockroach.

Li Shunan, a 78-year-old professor of traditional medicine from the southwestern province of Yunnan who is considered the godfather of cockroach research, said he discovered in the 1960s that ethnic minorities near the Vietnamese border were using a cockroach paste to treat bone tuberculosis.

“Cockroaches are survivors,” Li said. “We want to know what makes them so strong — why they can even resist nuclear effects.”

Liu Yusheng, head of the Shandong Insect Industry Assn. eats fried cockroaches. Liu worries about the rapid growth of an industry with too many inexperienced players and too little oversight.

Li reels off an impressive, if implausible, list of health claims: “I lost my hair years ago. I made a spray of cockroaches, applied it on my scalp and it grew back. I’ve used it as a facial mask and people say I haven’t changed at all over the years.

“Cockroaches are very tasty too.”

Many farmers are hoping to boost demand by promoting cockroaches in fish and animal feed and as a delicacy for humans.

Chinese aren’t quite as squeamish as most Westerners about insects — after all, people here still keep crickets as pets.

In Jinan, Wang Fuming and his wife, who run the farm together, seem genuinely fond of their cockroaches and a little hurt that others don’t feel affection.

“What is disgusting about them?” Li Wanrong, Wang’s wife, asked as a roach scurried around her black leather pumps. “Look how beautiful they are. So shiny!”

Over lunch at a restaurant down the block from his farm, Wang placed a plate of fried cockroaches seasoned with salt on the table along with more conventional cuisine, and proceeded to nibble a few with his chopsticks. He expressed disapproval that visiting journalists refused to sample the roaches.

On saying goodbye at the end of the day, he added a final rejoinder.

“You will regret your whole life not trying them.”

Nicole Liu in The Times’ Beijing bureau contributed to this report.


FOR THE RECORD: Wednesday’s Column One story about cockroach farming in China misstated the value of 150 Chinese yuan as $11. It is equal to $24.

Rescue of the beagles used as test subjects at the Instituto Royal

JC e-mail 4839, October 22, 2013

Fiocruz specialist calls the raid on the Instituto Royal a mistake (Jornal da Ciência)

For Marco Aurélio Martins, the activists’ attack on scientific experiments is an attempt to “irresponsibly” misinform the public

The “misguided” raid by animal-rights groups on the Instituto Royal, which carried off 178 beagle dogs as well as other laboratory animals, is cause for concern. The statement comes from Marco Aurélio Martins, head researcher of the Inflammation Laboratory at the Oswaldo Cruz Foundation (Fiocruz). “It is worrying because of the misguided rhetoric about the importance of this research,” he says in an interview with the Jornal da Ciência. The raid took place in the early hours of last Friday (the 18th) at the institution located in São Roque, 51 km from São Paulo.

In his view, the activists’ attack on scientific experiments is an attempt to “irresponsibly” misinform the general public, which is unfamiliar with the science. “Telling the public that animal experimentation is simply cruel, that it abuses animals, that it only harms them with no benefit either to human beings or to the animals themselves, is misinformation,” he declares.

Martins stresses that the use of animals in scientific experiments is still necessary for studying many areas of public health, from tropical diseases such as malaria to more serious conditions such as cancer, asthma and hypertension. “How can we give up studying such complex problems if we do not have experimental tools?” he asks. “Every medicine available on pharmacy shelves and in the veterinary market depended on animal experimentation at some point.”

The researcher insists that all scientific testing on animals complies with national regulations, laid out in the Arouca Law, No. 11.794, in force for the past three years. According to him, the use of animals in scientific experiments is not unique to Brazil; as he sees it, every country advanced in science and technology continues to use animals in experimentation. “It is not true that animals are no longer used in Europe and in the United States,” he says. “The restrictions are tighter (and only) for primates, such as monkeys and chimpanzees.”

JC – Are you familiar with the Instituto Royal’s policies for scientific experiments on animals?
Martins – I am affiliated with a national institute of science and technology for pharmaceuticals, INCT-Inofar, of which Royal is one of the collaborators. I know the Institute’s reputation and seriousness. But I have never visited it and have never used the center as a service provider.

What is your assessment of the activists’ raid on the Instituto Royal?
I view it with great concern. It is a radicalization. We have had similar incidents in Brazil in the past, but nothing this forceful. At Fiocruz itself, around 2000, there was a break-in, and researchers were later prosecuted because opossums were found outside their enclosures. But I have never seen anything as radical as this, with people going in and releasing the animals. This moment worries me a great deal, with Brazil going through social tension and protests, such as the Black Blocs. We have seen this movie in other countries, where this kind of activism led to enormous problems and aggression.

Does this scenario worry the scientific community?
It worries us because of the irresponsible misinformation. Telling the lay public that animal experimentation is simply cruel, that it abuses animals, that it only harms animals with no benefit to human beings or to the animals themselves: that is misinformation. It is not hard to stir up people who do not know how the research is actually done, or to claim, wrongly, that Brazil is the only country that uses animals in scientific experiments. The misguided rhetoric about the importance of this research is worrying. Brazil’s science professionals now face a very large responsibility. We have to be very skillful and count on the cooperation of the press so that our words are not distorted. We must take care to reassure the general public that the research centers established in Brazil are centers of excellence, not centers of terror.

What benefits do scientific experiments with animals bring to the public and to the animals themselves?
Every medicine available on pharmacy shelves and in the veterinary market depended on animal experimentation at some point. The risk of not doing this, of not doing the experiments, is enormous for the public when it comes time to make potential medicines available.

Scientific experiments with animals must comply with domestic legislation…
Of course the scientific community knows it has to follow the rules. We are required to obtain licenses, and there are laws regulating animal experimentation, in Brazil and around the world. In Brazil the legislation is the Arouca Law, in force for the past three years. If there were a complaint of mistreatment at Fiocruz or even at the Instituto Royal, Concea [the National Council for the Control of Animal Experimentation] is the body that receives, evaluates and investigates the complaint and takes action. Mistreatment of laboratory animals can be prosecuted as a crime. If an irregularity is occurring, it must be punished in exemplary fashion. What cannot be allowed is for people to go around raiding facilities and releasing laboratory animals. That harms not only the progress of scientific research but also the credibility of new drug development in the country, the public, and the animals themselves. If animals really are being mistreated, let that be taken to the competent authorities and let whoever is acting wrongly be punished.

Is that the case at the Instituto Royal?
I do not believe so. From what I know of the reputation of the people in charge, I have no reason to believe any kind of internal irregularity was taking place. If it were, in a worst-case scenario, our society today already has a channel for that, which is Concea.

Is research on animals still necessary?
Of course it is, because we need mechanisms to improve on the forms of medical treatment and therapy we have today. We still have enormous problems in many areas of public health, from tropical diseases such as malaria to more serious conditions such as cancer, asthma and hypertension. How can we give up studying such complex problems if we do not have experimental tools? How can we stop scientists and specialists, working under good practice and sound ethical conduct, from understanding diseases and seeking ways to control them? That would mean halting scientific investigation. The public must not be given the idea that animals can no longer be used in scientific experiments.

Do other countries still use animals in scientific experiments?
Of course they do. Every country considered advanced in science and technology continues to use animals in experimentation. It is not true that animals are no longer used in Europe or in the United States. The restrictions are tighter (and only) for primates, such as monkeys and chimpanzees.

Do the protocols prohibit cruelty to animals?
There can be no cruelty. That is a crime. When putting together an experimental protocol, the researcher has to guarantee the animal’s welfare, not least so that the results obtained from the experiment can be trusted.

(Viviane Monteiro – Jornal da Ciência)

Other articles on the subject:

Revista Galileu

‘One day we will reduce them. But ending animal testing now is impossible’

http://revistagalileu.globo.com/Revista/Common/0,,EMI344225-17770,00-UM+DIA+REDUZIREMOS+MAS+ACABAR+COM+TESTES+EM+ANIMAIS+AGORA+E+IMPOSSIVEL.html

Folha de S.Paulo

Removal of dogs from institute affects anticancer research, scientist says

http://www1.folha.uol.com.br/fsp/cotidiano/135127-retirada-de-caes-de-instituto-afeta-pesquisa-anticancer-diz-cientista.shtml

Animal experimentation

http://www1.folha.uol.com.br/fsp/opiniao/135050-experimentacao-animal.shtml

Congressman takes ‘custody’ of beagles and names them after his daughters

http://www1.folha.uol.com.br/fsp/cotidiano/135124-deputado-fica-com-guarda-e-da-nome-de-filhas-a-beagles.shtml

O Globo

São Paulo state prosecutors await police investigation before deciding on beagles

http://oglobo.globo.com/pais/ministerio-publico-de-sp-espera-investigacao-da-policia-para-decidir-sobre-beagles-10467368#ixzz2iSUmfYZr

O Estado de S.Paulo

Lab animal thieves

http://www.estadao.com.br/noticias/impresso,ladroes-de-cobaias-,1088290,0.htm

Institute will donate beagles that are recovered

http://www.estadao.com.br/noticias/impresso,instituto-doara-beagles-que-forem-recuperados,1088254,0.htm

Zero Hora

Sentimentality and animal rights

http://wp.clicrbs.com.br/opiniaozh/2013/10/22/sentimentalismo-e-direitos-dos-animais/?topo=13,1,1,,,13

Agência Câmara Notícias

Committee will investigate allegations of mistreatment of animals at the Instituto Royal

http://www2.camara.gov.br/camaranoticias/noticias/CIENCIA-E-TECNOLOGIA/455160-COMISSAO-INVESTIGARA-DENUNCIAS-DE-MAUS-TRATOS-CONTRA-ANIMAIS-NO-INSTITUTO-ROYAL.html

*   *   *

October 22, 2013 – 3:00 a.m.

Removal of dogs from institute affects anticancer testing, scientist says (Folha de S.Paulo)

JAIRO MARQUES and RAFAEL GARCIA

SÃO PAULO

The removal of 178 beagle dogs from a laboratory in São Roque (66 km from São Paulo) has compromised advanced trials of a drug for treating cancer, as well as of herbal medicines for various uses.

The information comes from the physician Marcelo Marcos Morales, one of the secretaries of the Sociedade Brasileira para o Progresso da Ciência (Brazilian Society for the Advancement of Science) and coordinator of Concea (the National Council for the Control of Animal Experimentation), which is linked to the Ministry of Science and Technology.

“Work that took years to produce, and that had promising results for the country’s development, has been thrown in the trash,” he said, referring to last week’s raid on the Instituto Royal by activists.

“The loss is incalculable for science and for the benefit of the public,” he said.

The scientist did not reveal the name of the drug under development, which is protected by contract, nor which type of cancer it would be used against. But he said it was a type of medicine produced abroad whose patent had been broken.

A room at the Instituto Royal, in São Roque (São Paulo state), is found with its contents ransacked.

The Royal institute also declines to detail the experiments, citing contractual restrictions.

The herbal medicines were based on plants from Brazil’s native flora and could be used, for example, to fight pain and inflammation.

Activists say the dogs were being mistreated. The institute denies this. Yesterday it said that, once the dogs are recovered, they will receive treatment and may “be put up for adoption.”

Morales, who holds a doctorate in biophysics, says that scientists “do not want to work with animals either,” but that the method is still the most effective way to test medical treatments and vaccines.

“Would it be possible for us to stop eating meat? With research it is the same relationship. Do we stop using animals and test vaccines on our children instead?”

For Morales, people are “confusing” household pets with dogs born inside animal facilities, under strict, controlled conditions for scientific use.

“The appeal of dogs is very strong; so much so that they took all the beagles but left all the rats.”

The Brazilian authority responsible for approving research on humans, Conep (the National Research Ethics Commission), does not endorse drug projects that have not undergone animal safety testing.

Dogs account for only a small share of scientific experiments; mice make up 74% of the animals used. Most dogs are used to assess the toxicity of medicines.

Art desk/Folhapress