Monthly archive: October 2014

Lack of rain reinforces need for nuclear plants, experts say (Agência Brasil)

Experts took part in the 3rd Seminar on Nuclear Energy at the Rio de Janeiro State University (UERJ)

The lack of rain in several regions of the country, especially the Southeast, points to the need to press on with investments in nuclear power plants. Besides affecting the water supply to the population, the drought also compromises power generation at hydroelectric plants, which increases the importance of nuclear ones. That is the assessment of experts taking part in the 3rd Seminar on Nuclear Energy at the Rio de Janeiro State University (UERJ), which began yesterday, the 7th, and closes this Wednesday, the 8th.

The president of Indústrias Nucleares do Brasil (INB), Aquilino Senra, stressed that the Brazilian energy matrix relies heavily on hydroelectricity, which has been hit by repeated, prolonged droughts in recent years.

“In Brazil, hydro generation contributes 92% of all energy produced. The remaining 8% comes from thermal complementation, in which nuclear plays a 4% role. This situation of low reservoirs will force a faster decision on expanding nuclear power production. Over the coming decades, nuclear growth is inevitable,” said Senra.

The supervisor of Eletronuclear’s Nuclear Safety Analysis Department, Edson Kuramoto, said that the reduced rainfall of recent years has forced the government to run the thermal plants, including the nuclear ones, at full capacity to guarantee supply. “Today it is clear that the Brazilian energy matrix is hydrothermal. Since 2012, with the drop in rainfall, the reservoirs have been low and the thermal plants have been dispatched precisely to make up for the shortfall in hydro generation. Nuclear energy has to be kept in mind, because Brazil masters the fuel cycle and we have large reserves of the fuel,” said Kuramoto.

According to Kuramoto, besides the Angra 1 and 2 plants, already in operation, and Angra 3, under construction, the country will need at least four more nuclear plants, two in the Northeast and two in the Southeast. “The hydroelectric potential we still have is in the North of the country, but licensing new plants with reservoirs has become difficult. In the past, our hydroelectric plants could withstand a six- or seven-month break in the rains; today it is three months. So the country will have to invest in thermal plants. By 2030, our hydro potential will be exhausted. From then on, Brazil will have to build new thermal plants, whether nuclear, gas, fuel oil or coal.”

According to the president of INB, Brazil has guaranteed uranium reserves for at least the next 120 years. That ensures a low fuel cost, with the added advantage of not emitting greenhouse gases. For Senra, the safety question, much debated because of the accident at the Fukushima plant in Japan, has already been resolved by the new generations of plants.

“The Fukushima reactors are second generation. Those now starting to be installed are third generation, and in them there would be no accidents like those that have already occurred, whether in 1979 in the United States [at Three Mile Island, Pennsylvania], in 1986 at Chernobyl [Ukraine], or in 2011 at Fukushima,” explained Senra.

(Vladimir Platonow/Agência Brasil)

http://agenciabrasil.ebc.com.br/geral/noticia/2014-10/falta-de-chuva-reforca-necessidade-de-usinas-nucleares-dizem-especialistas

Dowsers in the military (Ohio Buckeye Dowsers website)

Accessed Oct 6, 2014

General Rommel of the German Army – Don Nolan

http://www.tamar-dowsers.co.uk/articles/history.htm

General Patton (U.S. Army) – “General Patton had a complete willow tree flown to Morocco so that a dowser could use branches from it to find water to replace the wells the German Army had blown up. The British army used dowsers on the Falkland Islands to remove mines.”

– Don Nolan

http://www.tamar-dowsers.co.uk/articles/history.htm

“General Patton had two young men from Tennessee transferred to his unit. It is said that an army moves on its belly; I suggest that it and its machines need water as well. Without these water wells we would have lost our butts on that front.”

http://www.oocities.org/dowser.geo/dowse.html

Vernon Cameron – “a dowser, told Navy officials where all the U.S. and other submarines were located by map dowsing. They would not confirm or deny his findings, but a few years later he was denied a passport because he was considered a security risk.”

– Don Nolan

http://www.tamar-dowsers.co.uk/articles/history.htm

Hanna Kroeger – “…for years Cal-Tech was teaching the use of the pendulum to especially bright and interested graduate students. …So let’s join the smart and intelligent crowd and use the pendulum.”

http://www.zhealthinfo.com/pendulum.html  

Louis Matachia – “…in the late 1960’s, a dowser named Louis Matachia did demonstrate dowsing at Quantico, on a mock-up of a Vietnamese village. However, I don’t believe he ever “trained” the Marines in dowsing, or that dowsing was ever officially sanctioned by any service.”

http://forums.randi.org/archive/index.php/t-205.html

“In the USA, Louis J Matacia is a surveyor who has studied dowsing for years.  During the Vietnam War he was commissioned to teach dowsing skills to US Marines so that they could avoid booby traps, navigate safely through jungles and learn the whereabouts of the enemy. Soldiers reported that using the L-rod in this way saved many lives. Louis is particularly interested in the challenge of the search. Using his dowsing together with a range of scientific devices he has located lost pipes, oil, wells, caves and buried treasures.”

http://www.americaninsurancedepot.com/help/dowsing.htm

“The New York Times reported that the U.S. Marine Corps used dowsing in Vietnam (Baldwin, 1967)”

http://www.tricksterbook.com/ArticlesOnline/Dowsing.htm

By Cosmos, comment posted 07-Feb-2006 @05:14pm: “I’ve seen it work in Viet Nam to locate enemy tunnels. We would use copper L-shaped rods and when we walked over a tunnel the rods would cross. We would dig down and always find them.
I also witnessed a wooden divining rod find water in Viet Nam – in the highlands where it was not always so easy to find. In this case the ‘diviner’ was a ‘Sea Bee’ and he walked around with this stick, and when he got to a certain spot the stick twisted so much in his hands the bark split off. I thought he was twisting the stick himself, so I asked him if I could try it, and sure enough I could feel it twisting also. He put a stake in the ground where he wanted the drilling rig to drill and left the area. When he came back he found the engineers had started drilling about 5 feet from his stake. After drilling over 200 feet down they didn’t hit water. The Sea Bee then ordered them to drill where his stake was and they hit water at 75 feet.”

http://j-walkblog.com/index.php?/weblog/comments/dowsing/

“During the Viet Nam conflict (war, for lack of a better term) we used dowsers to locate enemy tunnel systems and weapons caches. Here our military brought in teams of dowsers, not simply to locate these materials, but to teach the skill to others. Then came the job nobody wanted, the ‘Tunnel Rat’: the poor bastard who, armed with a sidearm and a satchel charge of C-4, would enter these underground labyrinths to seek and destroy. Not a bad job till you find out that most of it had to be done in complete darkness in the tunnel in case there was a guard on duty. If that weren’t bad enough, our little buddies sometimes left behind a few small pit vipers. No one except for the few volunteered for this job!”

http://www.geocities.com/dowser.geo/dowse.html

“Armed Forces (dowsing used by the British Army since Colonial times); dowsing appeared in USSR army manuals in 1930 for the finding of water in remote areas; dowsing used by the First and Third US Marine Divisions in Vietnam, 1967, as a simple, low-cost method for locating Vietcong tunnels, which were used for communication, storage depots, supply network, command posts, training centres, hospitals and sally ports for over twenty years (Bossart 1968 in the Project Poorboy Annual Progress Report; Bird 1979, Chapter 11).”

http://www-sop.inria.fr/agos/sis/dowsing/dowsdean.html

Robert A. Swanson is the author of “The Miracle of Dowsing: How This Dowser Found the Ace of Spades Saddam”. [I found this one interesting… whether it is true or not… I’ll leave that up to you! – bfg]

http://www.ebookmall.com/ebook/186849-ebook.htm

Bolivians appeal to the devil against ‘man-eating’ mountain (BBC)

October 4, 2014

The Cerro Rico mines are a source of wealth and fear for local residents (Photo: Catharina Moh/BBC)

The mines of the Cerro Rico mountain in Bolivia are about 500 years old, and from them came the silver that brought riches to the old Spanish empire.

But the area is now riddled with tunnels and hazards, which turns the mountain into a trap for the men and boys who work there.

So much so that the population even resorts to the devil, praying for safety: superstition has led the workers to place images of a horned creature in the tunnels.

Marco, 15, one of the local residents, works in one of these dangerous tunnels, covered in sweat and dust. He hauls rocks in a wheelbarrow, something he repeats 35 to 40 times during his five-hour shift, often at night, after spending the day at school.

Marco’s mother moved to Cerro Rico with her four children after the father left. They live at the entrance of one of the tunnels, with no running water, using an abandoned mine as a toilet.

“I want to be a better person, not to work in the mine… I would like to graduate, to become a lawyer,” says Marco, whose family depends on his wages.

In the Spanish colonial era, the mountain produced ton upon ton of silver. Over the same period, an estimated 8 million people died there, earning Cerro Rico the nickname of the Mountain that Devours Men.

Today some 15,000 miners work on the mountain, and a local association reports that 14 women in the region are widowed every month. Life expectancy averages 40 years.

Accidents

Marco has worked in the mine for a year (Photo: Catharina Moh/BBC)

Like everyone who works on the mountain, Marco fears accidents and also silicosis, a disease caused by inhaling dust. Marco says his brother-in-law died of the disease before the age of 30.

“You swallow the dust, it goes into your lungs and attacks you,” said Olga, a single mother who looks after the miners’ equipment.

Olga’s sons, Luis, 14, and Carlos, 15, work pushing wheelbarrows, like Marco. Sometimes they start work at 2 a.m. so they can finish an eight-hour shift before going to school.

They also face another of the mountain’s dangers: the toxic gas released from the rocks.

“Your feet go weak and you get a headache. The gas is what is left after the dynamite explodes,” explained Carlos.

One woman said her husband breathed in the gas, grew dizzy and fell down a mine shaft, where he died.

The high death toll ends up breeding superstition.

The men and boys chew coca leaves, saying it helps filter out the dust. They also make offerings of coca leaves, along with alcohol and cigarettes, to El Tio, the demon-god of the mines.

Each of the 38 companies that run the mines on the mountain keeps a statue of El Tio in its tunnels.

The tunnels hold statues of El Tio, which receive offerings (Photo: Catharina Moh/BBC)

“He has horns because he is the god of the depths,” said Grover, Marco’s boss. “We usually gather here on Fridays to make the offerings, thanking him for giving us plenty of minerals and also asking his protection against accidents.”

“Outside the mine we are Catholics; when we go in, we worship the devil,” he said.

More children

Marco and Luis are not the youngest working in the mines.

Luis chews coca leaves before starting work (Photo: Catharina Moh/BBC)

“There are ten children I see (working). When they come here they have blisters on their hands, so I think they are inside the mines. Children of eight, nine, ten…,” said Nicolas Marin Martinez, head of the mountain’s only school, which is run by a Swiss charity.

A recent change in Bolivian law allows children of ten to work legally, but not in the mines, which are considered too dangerous.

Even so, a report by the Bolivian government’s ombudsman estimates that 145 children work in the mines. Another estimate suggests the number of children working on the mountain may reach 400.

Despite all this, the IMF says Bolivia has reduced its poverty levels and nearly tripled its per capita income since President Evo Morales took office, in 2005.

On October 12, Morales will seek a third term, promising to return the riches of the land to the poor.

For those living on Cerro Rico, the benefits of Morales’s government seem not to have arrived yet.

Can Big Data Tell Us What Clinical Trials Don’t? (New York Times)

Credit: Illustration by Christopher Brand

When a helicopter rushed a 13-year-old girl showing symptoms suggestive of kidney failure to Stanford’s Packard Children’s Hospital, Jennifer Frankovich was the rheumatologist on call. She and a team of other doctors quickly diagnosed lupus, an autoimmune disease. But as they hurried to treat the girl, Frankovich thought that something about the patient’s particular combination of lupus symptoms — kidney problems, inflamed pancreas and blood vessels — rang a bell. In the past, she’d seen lupus patients with these symptoms develop life-threatening blood clots. Her colleagues in other specialties didn’t think there was cause to give the girl anti-clotting drugs, so Frankovich deferred to them. But she retained her suspicions. “I could not forget these cases,” she says.

Back in her office, she found that the scientific literature had no studies on patients like this to guide her. So she did something unusual: She searched a database of all the lupus patients the hospital had seen over the previous five years, singling out those whose symptoms matched her patient’s, and ran an analysis to see whether they had developed blood clots. “I did some very simple statistics and brought the data to everybody that I had met with that morning,” she says. The change in attitude was striking. “It was very clear, based on the database, that she could be at an increased risk for a clot.”
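The kind of retrospective query described above can be sketched in a few lines. This is a minimal illustration with synthetic records, not the hospital's actual schema or data: filter earlier lupus patients whose symptom combination matches the new case, then compare clot rates between the matching group and everyone else.

```python
# Synthetic patient records: symptom flags plus whether a clot later developed.
records = [
    {"kidney": True,  "pancreatitis": True,  "vasculitis": True,  "clot": True},
    {"kidney": True,  "pancreatitis": True,  "vasculitis": True,  "clot": True},
    {"kidney": True,  "pancreatitis": True,  "vasculitis": True,  "clot": False},
    {"kidney": True,  "pancreatitis": False, "vasculitis": False, "clot": False},
    {"kidney": False, "pancreatitis": False, "vasculitis": True,  "clot": False},
    {"kidney": False, "pancreatitis": False, "vasculitis": False, "clot": False},
]

def clot_rate(patients):
    """Fraction of a patient group that went on to develop a clot."""
    return sum(p["clot"] for p in patients) / len(patients) if patients else 0.0

# Patients whose symptom combination matches the index case.
matching = [p for p in records
            if p["kidney"] and p["pancreatitis"] and p["vasculitis"]]
others = [p for p in records if p not in matching]

print(f"matching cohort: {len(matching)} patients, clot rate {clot_rate(matching):.2f}")
print(f"other patients:  {len(others)} patients, clot rate {clot_rate(others):.2f}")
```

The "very simple statistics" Frankovich mentions amount to exactly this sort of rate comparison between a symptom-matched cohort and the rest of the database.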

The girl was given the drug, and she did not develop a clot. “At the end of the day, we don’t know whether it was the right decision,” says Chris Longhurst, a pediatrician and the chief medical information officer at Stanford Children’s Health, who is a colleague of Frankovich’s. But they felt that it was the best they could do with the limited information they had.

A large, costly and time-consuming clinical trial with proper controls might someday prove Frankovich’s hypothesis correct. But large, costly and time-consuming clinical trials are rarely carried out for uncommon complications of this sort. In the absence of such focused research, doctors and scientists are increasingly dipping into enormous troves of data that already exist — namely, the aggregated medical records of thousands or even millions of patients — to uncover patterns that might help steer care.

The Tatonetti Laboratory at Columbia University is a nexus in this search for signal in the noise. There, Nicholas Tatonetti, an assistant professor of biomedical informatics — an interdisciplinary field that combines computer science and medicine — develops algorithms to trawl medical databases and turn up correlations. For his doctoral thesis, he mined the F.D.A.’s records of adverse drug reactions to identify pairs of medications that seemed to cause problems when taken together. He found an interaction between two very commonly prescribed drugs: The antidepressant paroxetine (marketed as Paxil) and the cholesterol-lowering medication pravastatin were connected to higher blood-sugar levels. Taken individually, the drugs didn’t affect glucose levels. But taken together, the side-effect was impossible to ignore. “Nobody had ever thought to look for it,” Tatonetti says, “and so nobody had ever found it.”
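The pair-screening idea can be illustrated with a deliberately naive sketch. The reports below are synthetic and the statistics simplistic; the actual work mined the FDA's adverse-event database with far more careful controls. The point is only the shape of the computation: the event rate for reports listing both drugs versus reports listing each drug alone.

```python
# Synthetic adverse-event reports: the drugs mentioned, and whether a
# high-blood-sugar event was recorded.
reports = [
    {"drugs": {"paroxetine", "pravastatin"}, "hyperglycemia": True},
    {"drugs": {"paroxetine", "pravastatin"}, "hyperglycemia": True},
    {"drugs": {"paroxetine"},                "hyperglycemia": False},
    {"drugs": {"paroxetine"},                "hyperglycemia": False},
    {"drugs": {"pravastatin"},               "hyperglycemia": False},
    {"drugs": {"ibuprofen"},                 "hyperglycemia": False},
]

def event_rate(reports, with_drugs, without_drugs=frozenset()):
    """Event rate among reports listing all of with_drugs and none of without_drugs."""
    hits = [r for r in reports
            if with_drugs <= r["drugs"] and not (without_drugs & r["drugs"])]
    return sum(r["hyperglycemia"] for r in hits) / len(hits) if hits else 0.0

pair_rate = event_rate(reports, {"paroxetine", "pravastatin"})
paroxetine_alone = event_rate(reports, {"paroxetine"}, {"pravastatin"})
pravastatin_alone = event_rate(reports, {"pravastatin"}, {"paroxetine"})
print(pair_rate, paroxetine_alone, pravastatin_alone)  # 1.0 0.0 0.0
```

In this toy data the signal appears only when the two drugs co-occur, which mirrors the pattern Tatonetti describes: no effect individually, a clear one in combination.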

The potential for this practice extends far beyond drug interactions. In the past, researchers noticed that being born in certain months or seasons appears to be linked to a higher risk of some diseases. In the Northern Hemisphere, people with multiple sclerosis tend to be born in the spring, while in the Southern Hemisphere they tend to be born in November; people with schizophrenia tend to have been born during the winter. There are numerous correlations like this, and the reasons for them are still foggy — a problem Tatonetti and a graduate assistant, Mary Boland, hope to solve by parsing the data on a vast array of outside factors. Tatonetti describes it as a quest to figure out “how these diseases could be dependent on birth month in a way that’s not just astrology.” Other researchers think data-mining might also be particularly beneficial for cancer patients, because so few types of cancer are represented in clinical trials.

As with so much network-enabled data-tinkering, this research is freighted with serious privacy concerns. If these analyses are considered part of treatment, hospitals may allow them on the grounds of doing what is best for a patient. But if they are considered medical research, then everyone whose records are being used must give permission. In practice, the distinction can be fuzzy and often depends on the culture of the institution. After Frankovich wrote about her experience in The New England Journal of Medicine in 2011, her hospital warned her not to conduct such analyses again until a proper framework for using patient information was in place.

In the lab, ensuring that the data-mining conclusions hold water can also be tricky. By definition, a medical-records database contains information only on sick people who sought help, so it is inherently incomplete. Such databases also lack the controls of a clinical study and are full of confounding factors that might trip up unwary researchers. Daniel Rubin, a professor of bioinformatics at Stanford, also warns that there have been no studies of data-driven medicine to determine whether it leads to positive outcomes more often than not. Because historical evidence is of “inferior quality,” he says, it has the potential to lead care astray.

Yet despite the pitfalls, developing a “learning health system” — one that can incorporate lessons from its own activities in real time — remains tantalizing to researchers. Stefan Thurner, a professor of complexity studies at the Medical University of Vienna, and his researcher, Peter Klimek, are working with a database of millions of people’s health-insurance claims, building networks of relationships among diseases. As they fill in the network with known connections and new ones mined from the data, Thurner and Klimek hope to be able to predict the health of individuals or of a population over time. On the clinical side, Longhurst has been advocating for a button in electronic medical-record software that would allow doctors to run automated searches for patients like theirs when no other sources of information are available.

With time, and with some crucial refinements, this kind of medicine may eventually become mainstream. Frankovich recalls a conversation with an older colleague. “She told me, ‘Research this decade benefits the next decade,’ ” Frankovich says. “That was how it was. But I feel like it doesn’t have to be that way anymore.”

Water rationing already affects 2.77 million people in São Paulo (Folha de S.Paulo)

JOÃO ALBERTO PEDRINI

CAMILA TURTELLI
FROM RIBEIRÃO PRETO
WILLIAM CARDOSO
FROM AGORA

10/04/2014, 2:00 a.m.

Despite the rains that hit some regions of the state in recent days, official water rationing already affects 2.77 million people in 25 municipalities.

The number of affected inhabitants is 32% higher than in August, when a Folha survey showed 2.1 million living under rotating cuts in 18 cities.

Official rationing occurs in municipalities where water supply and sewage treatment are the responsibility of the city governments.

The list includes no cities whose systems are run by Sabesp, although the company sells water to some of them.

The problem affects cities that draw water from rivers, lagoons, dams, streams, reservoirs and underground wells. None of them has a date for the rationing to end.

(Infographic: Alex Argozino/Editoria de Arte/Folhapress)

According to the Civil Defense agency, rainfall recorded in the state from January to September was 21.3% below the historical average: 12,972 mm against an average of 17,174 mm.

In Valinhos (85 km from São Paulo), the city government admits the rationing is likely to continue for another year. Twice a week, the city goes 18 hours without water.

Half of its water comes from the Cantareira system, 5% from wells and 45% from reservoirs, which are at low levels.

The city says it will invest in expanding its treatment capacity, currently at the limit. With the investment, of about R$ 3 million, it will be possible to draw more water from the Atibaia river.

Cravinhos (292 km from São Paulo), like Uchoa (416 km from São Paulo), draws only from wells and faces the same situation.

“Every year, in dry periods, we notice consumption rise, but we never had to ration. This is the first time,” said Claudio Henrique Alves Cairo, superintendent of the water and sewage department.

The largest municipality under rationing in the state is Guarulhos (16 km from São Paulo). There, 13% of the water comes from the city’s own production and 87% is bought from Sabesp.

The city government says it is studying an expansion of collection from new sources.

In Mauá (27 km from São Paulo), which also buys water from Sabesp, rationing began last Wednesday. The municipality says supply has fallen 22% since July. Residents say they have been suffering from the problem for three months.

Officially, the city was divided into five zones, each of which goes without water one day a week, Monday through Friday.

Retiree Luiz Carlos Lissoni, 56, has gone as long as four straight days without water, and yesterday his tap was dry. To ease the problem, he bought a 1,500-liter tank for R$ 410 and had another of 1,000 liters.

“I have a sick wife, who needs more than one bath a day,” he says.

Fall in monsoon rains driven by rise in air pollution, study shows (Science Daily)

Date: October 1, 2014

Source: University of Edinburgh

Summary: Emissions produced by human activity have caused annual monsoon rainfall to decline over the past 50 years, a study suggests. In the second half of the 20th century, the levels of rain recorded during the Northern Hemisphere’s summer monsoon fell by as much as 10 per cent, researchers say. Changes to global rainfall patterns can have serious consequences for human health and agriculture.


Scientists found that emissions of tiny air particles from human-made sources — known as anthropogenic aerosols — were the cause. High levels of aerosols in the atmosphere cause heat from the sun to be reflected back into space, lowering temperatures on Earth’s surface and reducing rainfall.

Levels of aerosol emissions have soared since the 1950s, with the most common sources being power stations and cars.

Researchers at the University of Edinburgh say their work provides clear evidence of human-induced rainfall change. Alterations to summer monsoon rainfall affect the lives of billions of people, mostly those living in India, South East Asia and parts of Africa.

The team calculated the average summer monsoon rainfall in the Northern Hemisphere between 1951 and 2005. They used computer-based climate models to quantify the impact of increasing aerosol emissions and greenhouse gases over the same period. They also took account of natural factors such as volcanic eruptions and climate variability to gauge the impact of human activity on the amount of monsoon rainfall.

Researchers say levels of human-made aerosols are expected to decline during the 21st century as countries begin adopting cleaner methods of power generation.

The study is published in the journal Geophysical Research Letters. The work was funded by the Natural Environmental Research Council, European Research Council and National Centre for Atmospheric Science.

Lead author Dr Debbie Polson, of the University of Edinburgh’s School of GeoSciences, said: “This study shows for the first time that the drying of the monsoon over the past 50 years cannot be explained by natural climate variability and that human activity has played a significant role in altering the seasonal monsoon rainfall on which billions of people depend.”

Journal Reference:

  1. D. Polson, M. Bollasina, G. C. Hegerl, L. J. Wilcox. Decreased monsoon precipitation in the Northern Hemisphere due to anthropogenic aerosols.Geophysical Research Letters, 2014; 41 (16): 6023 DOI: 10.1002/2014GL060811

Bacterium may have a rudimentary immune system, study indicates (Fapesp)

October 3, 2014

By Karina Toledo

Agência FAPESP – A study published in the journal Nature Communications has revealed that the bacterium Salmonella enterica can produce a protein very similar to human alpha-2-macroglobulin, which plays a key role in our immune system.

The hypothesis raised by the researchers at the Institute of Structural Biology (IBS) in Grenoble, France, is that in bacteria, too, macroglobulins could be part of a rudimentary defense system. If the theory is confirmed by future studies, these proteins could become targets for the development of new antibiotics.

“What is most fascinating is that macroglobulins are immense proteins, made up of almost 1,700 amino acid residues. If the bacterium synthesizes such a large molecule, it must play a very important role,” said the Brazilian researcher Andréa Dessen of the IBS, who also coordinates, at the National Bioscience Laboratory (LNBio) in Campinas, a project supported by FAPESP through the São Paulo Excellence Chairs (SPEC) program.

In the human body, the mission of alpha-2-macroglobulin is to detect and neutralize proteases secreted by invading microorganisms, the researcher said. Proteases are enzymes that break the bonds between the amino acids of proteins.

“The macroglobulin thus prevents the invaders’ proteases from destroying the body’s tissues, which would open the way to infection of deeper tissues,” she explained.

In addition, alpha-2-macroglobulin also binds to proteases involved in blood clotting, preventing important proteins from being destroyed improperly.

In earlier studies in which the genomes of several bacterial species were sequenced, German researchers had already noted the presence of the macroglobulin gene. At the IBS, the group led by Dessen had already carried out the biochemical characterization of the protein produced by the species Escherichia coli and Pseudomonas aeruginosa.

“Now, for the first time, we have studied the three-dimensional structure of the macroglobulin secreted by Salmonella enterica using a technique known as X-ray crystallography, which reveals details at the atomic level. And we were able to confirm that it is indeed very similar to the human macroglobulin,” Dessen said.

According to the researcher, the finding reinforces the hypothesis that alpha-2-macroglobulin serves to protect the bacterium from proteases secreted by other bacteria or by the host organism it is trying to infect.

“In a mouse model, Canadian researchers showed that strains of Pseudomonas aeruginosa that do not produce macroglobulin have a reduced capacity to cause disease, that is, they are less virulent. The protein seems to give the bacterium an advantage when colonizing the host, but we still do not know exactly why,” she said.

Next steps

In a branch of the research being conducted at LNBio, with FAPESP support and Dessen’s supervision, the French postdoctoral researcher Samira Zouhir is investigating the structure of the macroglobulin synthesized by bacteria of the species Pseudomonas aeruginosa, a cause of many hospital-acquired infections.

“If we manage to unravel the protein’s three-dimensional structure, it will give us clues about its function in the infectious process,” Dessen said.

Once the role of macroglobulins is well understood in different bacterial species, she added, these proteins could become targets for the development of new antibiotics.

“There is also interesting research in mouse models showing that administering human alpha-globulin can offer protection against sepsis. There are several treatment possibilities to be explored,” the researcher said.

The article “Structure of a bacterial α2-macroglobulin reveals mimicry of eukaryotic innate immunity” (doi: 10.1038/ncomms5917) can be read at www.nature.com/ncomms/2014/140915/ncomms5917/full/ncomms5917.html.

Doing math with your body (Science Daily)

Date: October 2, 2014

Source: Radboud University

Summary: You do math in your head most of the time, but you can also teach your body how to do it. Researchers investigated how our brain processes and understands numbers and number size. They show that movements and sensory perception help us understand numbers.


In this example the physically largest number (2) is the smallest in terms of meaning. It was harder for test subjects to identify a 2 as the physically largest number than it was for them to identify a 9 as the largest number. Credit: Image courtesy of Radboud University

You do math in your head most of the time, but you can also teach your body how to do it. Florian Krause investigated how our brain processes and understands numbers and number size. He shows that movements and sensory perception help us understand numbers. Krause defends his thesis on October 10 at Radboud University.

When learning to do math, it helps to see that two marbles take up less space than twenty. Or to feel that a bag with ten apples weighs more than a bag with just one. During his PhD at Radboud University’s Donders Institute, Krause investigated which brain areas represent size and how these areas work together. He concludes that number size is associated with sizes experienced by our body.

Physically perceived size

Krause asked test subjects to find the physically largest number in an image containing eighteen numbers. Sometimes this number was also the largest in terms of meaning, but sometimes it wasn’t. Subjects found the largest number faster when it was also the largest in terms of meaning. ‘This shows how sensory information about small and large is associated with our understanding of numbers’, Krause says. ‘Combining this knowledge about size makes our processing of numbers more effective.’
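The size-congruity manipulation behind this task can be sketched as trial generation code. This is only an illustration of how such trials are coded, with a simplified four-digit display standing in for the real eighteen-number stimuli; the font sizes and counts are assumptions, not details from the study.

```python
import random

def make_trial(rng):
    """Build one display: one digit is rendered physically largest, and the
    trial is congruent only when that digit is also numerically the largest."""
    digits = rng.sample(range(1, 10), 4)   # four distinct digits shown
    target = rng.choice(digits)            # the physically largest item
    font_pt = {d: 20 for d in digits}      # base font size in points
    font_pt[target] = 36                   # enlarge the target digit
    return {"digits": digits, "target": target, "font_pt": font_pt,
            "congruent": target == max(digits)}

rng = random.Random(0)
trials = [make_trial(rng) for _ in range(20)]
congruent_share = sum(t["congruent"] for t in trials) / len(trials)
```

Comparing response times on the congruent and incongruent trials produced this way is what reveals the interference effect Krause measured.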

More fruit, more force

Even very young children have a sensory understanding of size. In a computer game, Krause asked them to lift up a platform carrying a few or many pieces of fruit by pressing a button. Although the amount of force applied to the button did not matter — simply pressing it was adequate — children pushed harder when there was a lot of fruit on the platform and less hard when there was little fruit on the platform.

Applications in education

Krause believes his results could find applications in math education. ‘If numerical size and other body-related size information are indeed represented together in the brain, strengthening this link during education might be beneficial. For instance by using a ‘rekenstok’, which makes you experience how long a meter or ten centimeters is when holding it with both hands. This general idea can be extended to other experienceable magnitudes besides spatial length, by developing tools which make you see an amount of light or hear an amount of sound that correlates with the number size in a calculation.’

Battle between NSF and House science committee escalates: How did it get this bad? (Science)

HOUSE COMMITTEE ON SCIENCE, SPACE & TECHNOLOGY. Representatives Eddie Bernice Johnson (D–TX) and Lamar Smith (R–TX)

Four times this past summer, in a spare room on the top floor of the headquarters of the National Science Foundation (NSF) outside of Washington, D.C., two congressional staffers spent hours poring over material relating to 20 research projects that NSF has funded over the past decade. Each folder contained confidential information that included the initial application, reviewer comments on its merit, correspondence between program officers and principal investigators, and any other information that had helped NSF decide to fund the project.

The visits from the staffers, who work for the U.S. House of Representatives committee that oversees NSF, were an unprecedented—and some say bizarre—intrusion into the much admired process that NSF has used for more than 60 years to award research grants. But unlike the experts who have made that system work so well, the congressional staffers weren’t there to judge the scientific merits of each proposal. That wasn’t their intent.

The Republican aides were looking for anything that Representative Lamar Smith (R–TX), their boss as chair of the House Committee on Science, Space, and Technology, could use to support his ongoing campaign to demonstrate how the $7 billion research agency is “wasting” taxpayer dollars on frivolous or low-priority projects, particularly in the social sciences. The Democratic staffers wanted to make sure that their boss, Representative Eddie Bernice Johnson (D–TX), the panel’s senior Democrat, knew enough about each grant to rebut any criticism that Smith might levy against the research.

The peculiar exercise is part of a long-running and bitter battle that is pitting Smith and many of his panel’s Republican members against Johnson and the panel’s Democrats, NSF’s leadership, and the academic research community. There’s no end in sight: The visits are expected to continue into the fall, because NSF has acceded—after some resistance—to Smith’s request to make available information on an additional 30 awards. (Click here to see a spreadsheet of the requested grants.)

And the feud appears to be escalating. This week, Johnson wrote to Smith accusing him “of go[ing] after specific peer-reviewed grants simply because the Chairman personally does not believe them to be of high value.”  (Click here to see a PDF of Johnson’s letter and related correspondence from Smith and NSF.)

Smith, however, argues he is simply taking seriously Congress’s oversight responsibility. And he promises to stay the course: “Our efforts will continue until NSF agrees to only award grants that are in the national interest,” he wrote in a 2 October e-mail to ScienceInsider.

Ask, answered

How did things get to this point? For the past 18 months, Smith has waged a very public assault on NSF’s storied peer-review system. He’s issued a barrage of press releases that ridicule specific awards, championed legislation that would alter NSF’s peer-review system and slash funding for the social science programs that have supported much of the research he has questioned, and berated NSF officials for providing what he considers to be inadequate explanations of their funding decisions.

NSF has defended itself at congressional hearings, in personal meetings with committee staff and the chair, and with a stream of letters and e-mails. White House officials, university leaders, and Democratic legislators have joined the fray, roundly criticizing Smith for what they see as an attempt to impose his political judgment on a process that draws upon the wisdom of scientific experts. But that nearly universal condemnation hasn’t stopped Smith, who was first elected to Congress in 1986 and last year was named chairman of the science committee.

Smith describes his growing frustration with NSF in a 27 August letter to NSF Director France Córdova. (The committee made this and another letter available to ScienceInsider.) Smith notes that he first asked for materials relating to several grants in the spring of 2013, soon after Cora Marrett became acting NSF director following Subra Suresh’s resignation to become president of Carnegie Mellon University.

But after being rebuffed by Marrett, Smith writes that he “set aside the request … until a permanent NSF director was installed.” Córdova was confirmed by the Senate this past March, and on 7 April Smith wrote her a letter containing a list of 20 grants that he wanted to examine.

Smith’s request created a major dilemma for NSF. On the one hand, Córdova knew that Congress has the authority to obtain information as part of its job to oversee the actions of federal agencies, a right that federal courts have repeatedly upheld. On the other hand, NSF constantly assures scientists that every aspect of the peer-review process will remain confidential. (NSF’s website contains abstracts of projects it has funded, and the public can obtain a copy of a successful application. NSF does not share any information about, or even acknowledge the existence of, proposals that have been rejected.)

Smith wanted the material shipped to his offices on Capitol Hill. But Córdova made a counteroffer that the Texas legislator grudgingly accepted. First, the committee staff could see everything related to the grant except for the names of the reviewers, which would be redacted. Second, the material would remain at NSF headquarters in Arlington, Virginia. Third, the staff could take copious notes, but none of the information could be photocopied or otherwise reproduced.

Judy Gan, head of public and legislative affairs at NSF, says the arrangement “preserves the integrity of the merit review process.” Even so, NSF officials have sent letters to the president of each university with a grant on Smith’s hit list, hoping to reassure them that everything is under control. NSF had no choice but to comply with the committee’s request, the letters explain. But NSF chose to tell each institution about the request “so that you may take appropriate action to inform your principal investigator and other potentially impacted parties about this production of documents.”

In many cases, NSF staffers had already sounded the alarm. Steven Folmar, a cultural anthropologist at Wake Forest University in Winston-Salem, North Carolina, recalls getting a call from his program manager last month alerting him to the science committee’s pending review of his 2012 grant, titled “Oppression and Mental Health in Nepal.” The 3-year, $160,000 award supported him and two colleagues in a study of how social status affects the mental health of Nepalese adolescents. Folmar has worked on and off in Nepal since 1979, and he says the country’s economic and cultural divisions are so striking that it’s an ideal place to measure the impact of discrimination on those in the lowest caste.

Folmar says that his first reaction after hearing that his grant had been singled out was to hunker down and keep quiet. “I felt like somebody in a war movie, with bullets whizzing over my head.” But after further reflection, he thinks that speaking up may not be such a bad idea.

“I’d tell [Smith] that our work has a great deal of relevance to this country,” he says. Measuring how social inequality can cause depression and anxiety is valuable information for U.S. public health officials, too, he explains, noting that some Nepalese victims display symptoms akin to post-traumatic stress disorder.

The project was a bargain, he adds. The grant covered several months of field work by three senior researchers and their graduate students, he notes, “all for about $50,000 a year. That’s pretty cheap science.”

Parsing the list

The scientific community is scratching its head over how Smith compiled his list of questionable grants. Many have also been flagged by other legislators, notably Senator Tom Coburn (R–OK), who issue annual lists of what they consider to be wasteful government spending. Research grants often appear on such lists. Decades ago, former Senator William Proxmire (D–WI) created what he called the Golden Fleece Awards to poke fun at such supposed boondoggles. In fact, the practice has become so widespread that 3 years ago a coalition of scientific organizations created a counterpoint, called the Golden Goose Awards, which honors federally funded basic research that later turned out to have huge societal benefits.

But Proxmire’s awards were never meant to fundamentally alter NSF’s peer-review system, according to Folmar. “This sounds like Golden Fleece with a much more dangerous twist,” he says.

Smith so far has asked to take a look at 50 grants. (Note: ScienceInsider was able to identify just 47 unique awards.) And the list is hard to characterize. One grant goes back to 2005, and 13 appear to have expired. The total amount of money awarded is about $26 million. The smallest grant, awarded in 2005, is $19,684 for a doctoral dissertation on “culture, change & chronic stress in lowland Bolivia.” The largest, for $5.65 million, is for a project that aims to use innovative education methods to educate Arctic communities about climate change and related issues.

More than half of the grants appear to involve work outside the United States. The largest number—29—were funded through NSF’s social, behavioral, and economic (SBE) sciences directorate. Of those, 21 came from SBE’s behavioral and cognitive sciences division, including a number of grants in archaeology and anthropology. In all, six of NSF’s seven directorates funded grants on Smith’s hit list.

What the science committee expects to learn from its investigation is a burning question among scientists. A committee representative declined to answer repeated queries about the criteria used to select the grants. In his written statement to ScienceInsider, Smith said only that “there are many grants that no taxpayer would consider in the national interest, or worthy of how their hard-earned dollars should be spent. … The public deserves an explanation for why the NSF has spent hundreds of thousands of dollars on musicals about climate change, bicycle designs, and a video game that allows users to relive prom night.”

Mont Hubbard is the “bicycle designs” grantee on Smith’s intended list of shame. An emeritus professor of mechanical engineering at the University of California, Davis, Hubbard received $300,000 in 2009 to study the feedback system that allows humans to control a vehicle, in this case a bicycle. And Hubbard has a ready answer to Smith’s question about how his research could possibly serve the national interest.

“It’s easy to learn to ride a bicycle, but it’s hard to explain how we do it,” Hubbard says. His broader research into operator control of mechanical systems has applications across many areas, he explains. Substitute “pilot” for “rider” and “airplane” for “bicycle,” he says, and it’s clear that helping humans do a better job of manipulating machines has the potential to greatly improve performance, reduce safety risks, and promote economic growth.

Present stalemate

What’s next? So far, neither side has shown any signs of backing down. In his 27 August letter to Córdova, Smith declares that “the current review work is 5% complete, which implies that this oversight initiative will span at least 12 months.” He accuses her of reneging on a promise to provide the committee with everything it requested and speculates that she “may be banking on a cumbersome, time-consuming federal court process” to back her up. That approach puts NSF “in an indefensible position,” he says, predicting that such tactics will ultimately fail and that NSF will be forced to give in to his demands.

In her reply 2 weeks later, Córdova denies withholding any pertinent information. “To the contrary,” she writes, “NSF has provided the Committee full and complete access to our files for each of the grants of interest.” She disagrees with his assertion that “NSF does not trust the Committee.” But she acknowledges that “we are balancing this access with the need to preserve the trust of the scientific community, whose participation in the merit review process occurs in a confidential environment.”

With such strong rhetoric on both sides, it’s hard to see a quick or quiet ending to this confrontation. Johnson certainly seems prepared to continue defending NSF and, in particular, its funding of the social sciences. “This campaign against NSF’s merit-review system is indefensible absent some compelling explanation of what you are trying to accomplish,” she tells Smith in her 30 September letter. “If your ultimate goal is to cut funding for social and behavioral sciences …I respect your right to try to make that case as Chairman. But please do not compromise the integrity of NSF’s merit review system as part of this campaign.”

With reporting by David Shultz.

Correction 3 October, 8:05 a.m.: Steven Folmar studies how social inequality can contribute to, not treat, depression.

Futures of the Past – The Appendix

Futures of the Past

“Futures of the Past” is an issue about how past generations have reckoned their collective futures. But it’s also about how the razor’s edge of the present comes up against the haziness of futurity, and what happens when that hazy future becomes inscribed, remembered, and—eventually—forgotten. We’re interested here in the work that the future does in shaping history—as a utopian dream, a set of collective anxieties, or simply as a story that we tell about where we come from and where we hope to end up.


Chapter 1: Bad Predictions


Chapter 2: Futures Past


Chapter 3: The Politics of the Future

The History of Pain (The Appendix)


How should we write the history of that most fundamental but subjective characteristic of sentience: pain?

The History of Pain

I recently came across a fascinating article in The Appendix by Ph.D. candidate Lindsay Keiter entitled Interpreting “Physick”: The Familiar and Foreign Eighteenth-Century Body. Law professor Frank Pasquale excerpted it thusly:

Because I am an historian of pain, this excerpt naturally piqued my interest, and I went to examine the entire article. Now, I must confess straight off that I study 19th and early 20th century America. But I spend an awful lot of time interloping in early modern and medieval studies of pain, in part because my work addresses changing ideas of pain in the 19th century. If you really want to understand how ideas about pain change in the modern era, you need to know at least something about the ideas that preceded them.

I was, I confess, quite skeptical about the excerpt, but I wanted to read the article from start to finish. Here is what Ms. Keiter has to say about pain:

Most visitors are mildly alarmed to learn that there was nothing available for mild, systemic pain relief in the eighteenth century. You’d have to come back next century for aspirin. Potent pain management was available via opium latex, often mixed with wine and brandy to make laudanum. In the eighteenth century, small amounts were used as a narcotic, a sedative, a cough suppressant, or to stop up the bowels, but not for headaches.

This is (appropriately) carefully qualified, but even so, I do not think it is quite right. I think there are two points that are really important to clarify when thinking about the use of medicinal therapies for the relief of pain.

First, it has long been argued that professional healers at least as far back as the Middle Ages generally were not focused on alleviating their patients’ pain. Of course, then, as now, pain was a multivalent, rich, and highly ambiguous phenomenon, one that lends itself easily to metaphor and account in a wide variety of social domains. So, as Esther Cohen shows, most discussions of pain in Western medieval culture tend to appear in theological contexts, whereas early modern and modern expressions of pain often appear more in literary formats. It is actually surprisingly difficult to find people discussing their own phenomenologies of pain specifically in therapeutic contexts.


A mid-17th century depiction of a quack doctor and his assistants performing public surgery on an unfortunate young man. Various medicinal liquors and unguents are on display to his right, and a recently treated man is hastily downing a post-op beer while being wheeled away from the scene. Jan Steen, “The Quack Doctor,” c. 1660, Rijksmuseum Amsterdam.

But both medievalists and early modernists have set about revising, or at least complicating, some aspects of the long-held belief that analgesia was not a major priority. Cohen shows beyond doubt that in late medieval culture both lay sufferers and healers focused a great deal on pain, and that there is ample evidence from which to conclude that healers believed in the importance of alleviating their charges’ pain and strove, where possible, to do so. She notes:

Surcease might not have been the primary goal of physicians, who often considered pain an ancillary phenomenon, but in the end, the recommended cure was meant to also bring freedom from pain. It is important to remember that the great majority of the suffering sick agreed with this point of view. People turned to saints, physicians, or simple healers to have their pain eased, not increased. No matter how vociferous the literature in praise of pain is, it cannot silence the evidence for the basic human search for painlessness.

The evidence, as I understand it, suggests that a medieval emphasis on the redemptive qualities of pain and the difficulties in ameliorating it existed simultaneously along with a fairly intense and significant focus on the need to alleviate it.

On Twitter, historian of medicine Samantha Sandassie noted that one can find many recipes for analgesic remedies in early modern casebooks and treatises:

Daniel Goldberg @prof_goldberg .@FrankPasquale @ArsScripta I’m a modern historian of pain, not EM, but this strikes me as not quite right. Relief of pain was major [1]

Samantha Sandassie @medhistorian .@FrankPasquale @ArsScripta agreeing with @prof_goldberg on this one; surgical casebooks & treatises contain fair bit of pain mngment info.

In a follow-up email, Ms. Sandassie suggests examining primary sources such as The Diary of Elizabeth Freke and The Diary of the Rev. Ralph Josselin.

In early modern contexts, historians such as Lisa Smith and Hannah Newton have documented overt and in some contexts (the pain and suffering of children) even overwhelming medical and healing attention to experiences of pain and the need for its alleviation in illness scenarios.

So, I would want to suggest that we lack a lot of good evidence for the claim that even “mild” and “systemic” pain did not occupy the attention of healers in the West during the 18th c., and we have an increasing historiography suggesting that in fact such pain occupied a good deal of attention both in those who experienced it and in those from whom the pain sufferers sought relief.

The second point I want to make here is a larger claim regarding thinking about how well medicines may have “worked” in past contexts. And here I’d like to emphasize the flip side of Ms. Sandassie’s excellent point above: that while the past may not be incommensurable, it is nevertheless at times so very different from our contemporary world that presentism is an ever-present danger. The past, as L.P. Hartley famously observed, is a foreign country, and sometimes we benefit from treating it that way.

What does it mean for a remedy to “work”? And in answering this question as responsible historians, we cannot supply an answer that provides the criteria for what “works” in our contemporary contexts — and of course there are vibrant debates on exactly what it means for a medicine to “work” even among contemporaries. For example, does a medicine work for the relief of pain if it fails to surpass placebo in a relevant clinical trial? Given that placebos can be quite effective in relieving pain, at least temporarily, do we conclude from the failure that the medicine does not “work”?

In historical context, historians of medicine should IMO aim to answer this question by asking what it meant for people in the periods in which we are interested for a remedy to “work.” Although I am an historian of ideas rather than a social historian, I take the lessons of the New Social Turn seriously. If we really want to gain insight into the phenomenology of pain and illness in the past, we have to inquire as to the social meaning of medicines and remedies in their own contexts.

In his classic 1977 paper “The Therapeutic Revolution: Medicine, Meaning and Social Change in Nineteenth‑Century America,” Charles Rosenberg argued that remedies that “worked” in the early 19th century were those that had visible effects consistent with what one would expect and desire in a humoral system:

The American physician in 1800 had no diagnostic tools beyond his senses and it is hardly surprising that he would find congenial a framework of explanation which emphasized the importance of intake and outgo, of the significance of perspiration, of pulse, of urination and menstruation, of defecation, of the surface eruptions which might accompany fevers or other internal ills. These were phenomena which he as physician, the patient, and the patient’s family could see, evaluate, and scrutinize for clues as to the sick person’s fate.

But if diagnosis for the physician, the illness sufferer, and the family alike depended in pertinent part on the visible signs that signified morbid changes in humoral balance, one would predict that remedies which also operated on this semiotic basis would be favored. Rosenberg states:

The effectiveness of the system hinged to a significant extent on the fact that all the weapons in the physician’s normal armamentarium worked — “worked,” that is, by providing visible and predictable physiological effects: purges purged, emetics induced vomiting, opium soothed pain and moderated diarrhea. Bleeding, too, seemed obviously to alter the body’s internal balance, as evidenced both by a changed pulse and the very quantity of blood drawn. Blisters and other purposefully induced local irritations certainly produced visible effects — and presumably internal consequences corresponding to their pain and location and to the nature and extent of the matter discharged.

This, then, is the point. It is not productive, in my view, to think about whether or not early 19th c. or 18th c. remedies for pain “worked” by applying present notions of efficacy. Such an approach does not make sense of how those who used and received the remedies for pain would have understood those remedies — it obfuscates both their own phenomenologies of pain and their own efforts and the efforts of their caregivers, intimates, and healers to alleviate that pain. What we would have to do to understand the extent to which 18th c. remedies for pain “worked” is to understand what it meant for such a remedy to work for people of that time.

Rosenberg, of course, emphasizes the importance of understanding the “biological and social realities” of therapeutics in early 19th c. America. And in a follow-up Twitter exchange, Benjamin Breen was quick to point out (correctly, I think) that

Benjamin Breen @ResObscura @prof_goldberg @allenshotwell @medhistorian But on the other hand, the biological efficacy of drugs had a real historical role, no? I.e.
Daniel Goldberg @prof_goldberg @ResObscura @AllenShotwell @medhistorian *writing feverishly* — hope to have post up soon . . .
Benjamin Breen @ResObscura @prof_goldberg @allenshotwell @medhistorian cinchona RX for malaria was major event, precisely because it “worked” where others failed.

This is an important corrective. Acknowledging what anthropologists have termed the “social lives of medicines” is not equivalent to denying “biological reality.” However, I reject a neat distinction between biological action and cultural factors. This is not to deny the reality of the former, but to argue instead that the distinction is not particularly helpful in making sense of the history of medicine.

*   *   *

Interpreting “Physick”: The Familiar and Foreign Eighteenth-Century Body

Photograph of recreated apothecary shop

A recreated apothecary shop in Alexandria, Virginia, is representative of the style and organization of eighteenth-century shops. Wikimedia Commons

Aspirin. It’s inevitable.

When asked what medicine they’d want most if they lived in the eighteenth century, visitors will struggle in silence for a few moments. Then, someone will offer, “Aspirin?” Sometimes it’s delivered with a note of authority, on the mistaken notion that aspirin is derived from Native American remedies: “Aspirin.” I modulate my answer depending on the tone of the question. If the mood feels right, especially if the speaker answers with a note of condescension, I’ve been known to reply, “Do you know anyone who’s died of a mild headache?”

I work as an interpreter in the apothecary shop at the largest living history museum in the US.

If visitors leave with new knowledge, I’m satisfied. Another hundred people will remember the next time they make a meringue that cream of tartar is a laxative. But what I really want them to come away with is an idea. I want visitors to understand that our eighteenth-century forebears weren’t stupid. In the absence of key pieces of information—for example, germ theory—they developed a model of the body, health, and healing that was fundamentally logical. Some treatments worked, and many didn’t, but there was a method to the apparent madness.

Engraving from Hohberg

This seventeenth-century engraving shows medicines being compounded and dispensed. Women were not licensed as apothecaries in the eighteenth century, but evidence suggests that in England, at least, they sometimes assisted husbands and fathers. Wolfgang Helmhard Hohberg’s Georgica Curiosa Aucta (1697) via Wellcome Images


Most visitors are mildly alarmed to learn that there was nothing available for mild, systemic pain relief in the eighteenth century. You’d have to come back next century for aspirin. Potent pain management was available via opium latex, often mixed with wine and brandy to make laudanum. In the eighteenth century, small amounts were used as a narcotic, a sedative, a cough suppressant, or to stop up the bowels, but not for headaches.

There were headache treatments, however. Colonial medical practitioners recognized multiple types of headaches based on the perceived cause, each with its own constellation of solutions. As is often the case, the simplest solutions were the most effective. For a headache caused by sinus pressure, for example, the treatment was to induce sneezing with powdered tobacco or pepper. Some good, hard sneezing would help expel mucus from the sinuses, thus relieving the pressure. For “nervous headaches”—what we call stress or tension headaches—I uncork a small, clear bottle and invite visitors to sniff the contents and guess what the clear liquid inside could be.

With enough coaxing, someone will recognize it as lavender oil. While eighteenth-century sufferers rubbed it on their temples, those with jangling nerves today can simply smell it—we don’t understand the exact mechanism, but lavender oil has been shown to soothe the nervous system. As a final example, and to introduce the idea that the line between food and medicine was less distinct two hundred years ago, I explain the uses of coffee in treating migraines and the headaches induced after a “debauch of hard liquors.” Caffeine is still used to treat migraines because it helps constrict blood vessels in the head, which can reduce pressure on the brain.

But if your biggest medical concern in the eighteenth century was a headache, you were lucky. Eighteenth-century medical practitioners faced menaces like cholera, dysentery, measles, mumps, rubella, smallpox, syphilis, typhus, typhoid, tuberculosis, and yellow fever. Here are a few.


Malaria

photograph of cinchona bark

The bark of the cinchona tree, called Peruvian bark or Jesuits’ bark, was an important addition to the European pharmacopeia. Wikimedia Commons

In discussing larger threats, I generally choose to focus on an illness that many visitors have heard of before, and for which a treatment was available. The “intermittent fever” also gives visitors a glimpse of one of the difficulties of studying the history of medicine: vague and often multiple names for a single condition. Intermittent fever was so called because of a particular symptom that made it easier to identify among a host of other fevers: sufferers experienced not only the usual fever, chills, and fatigue, but also paroxysms, cycles of intense chills followed by fever and sweating. Severe cases could result in anemia, jaundice, convulsions, and death.


After describing the symptoms to guests, I mention that the disease tended to afflict those living in swampy, hot, low-lying areas—such as Williamsburg. Older visitors often put it together—intermittent fever is what we call malaria. And typically, they know the original treatment for malaria was quinine.

It’s one of the times I can say, “We have that!” rather than, “Give us another hundred years.” I turn to the rows of bottles on the shelf behind me—not the eighteenth-century original apothecary jars that line the walls, but a little army of glass bottles, corked and capped with leather. The one I’m looking for is easy to find—a deep red liquid in a clear glass bottle. As I set it on the front counter, I introduce the contents: “Tincture of Peruvian bark.” I tend to add, “This is something I would have in my eighteenth-century medical cabinet.” Walking to the rear wall, I pull open a drawer and remove a wooden container. I lift the lid to reveal chunks of an unremarkable-looking bark. I explain that the bark comes from the cinchona tree, and, as undistinguished as it looks, it was one of the major medical advances of the seventeenth century.

Also called Jesuits’ bark, cinchona was used as a fever-reducer by native peoples in South America before being exported by the Jesuits to Europe. Its efficacy in fighting fevers soon made it a staple in English medical practice. While eighteenth-century apothecaries were ignorant of quinine, which would not be isolated and named until the 1810s, they were nonetheless prescribing it effectively.

The rings and dark dots are the result of infection by Plasmodium falciparum, one of the strains of protozoa that cause malaria. Wikimedia Commons

I make a point of explaining to visitors that quinine does not act like modern antibiotics do in killing off infections directly. Malaria is neither bacterial nor viral, but protozoan. Quinine (and more modern drugs derived from it and increasingly from Chinese Artemisia) interrupts the reproductive cycle of the malaria protozoa, halting the waves of offspring that burst forth from infected red blood cells. The protozoa, now rendered impotent, hole up in the sufferer’s liver, often swarming forth later in life in another breeding bid. So technically, once infected, you’ll always have malaria, but you can suppress the symptoms.

Peruvian bark was used to treat a wide range of fevers, but it was not the only treatment. In certain instances of fever, it was used in conjunction with bloodletting. Bloodletting is a practice I’m always eager to explain, because it is so revealing of just how much our understanding of the body has changed in two centuries. Plus it freaks people out.


Fevers: A Note on Phlebotomy


Bloodletting, or phlebotomy, dates back to antiquity. In the humoral theory of the body promulgated by Greco-Roman physicians, removing blood promoted health by balancing the humors, or fluids, of the body. This theory prevailed from roughly the fourth through the seventeenth centuries. Medical theorists gradually adopted a more mechanical understanding of the body, inspired by a renewed interest in anatomy and by experiments that explored the behavior of fluids and gases. These new theories provided an updated justification for bloodletting in particular cases.

Illustration of breathing a vein

“Breathing a Vein” by James Gillray, 1804. (Wikimedia Commons)

Whereas bloodletting had been a very widely applied treatment in ancient times, eighteenth-century apothecaries and physicians recommended it in more limited cases. In terms of fevers, it was to be applied only in inflammatory cases, which were associated with the blood, rather than putrid or bilious fevers, which were digestive. In Domestic Medicine, a popular late-eighteenth-century home medical guide, physician William Buchan warned that “In most low, nervous, and putrid fevers … bleeding really is harmful…. Bleeding is an excellent medicine when necessary, but should never be wantonly performed.”

Eighteenth-century medical professionals believed that acute overheating often brought on inflammatory fevers. Key symptoms of inflammatory fevers were redness, swelling, pain, heat, and a fast, full pulse. Anything that promoted a rapid change in temperature, such as overexertion or unusually spicy food, could set off a chain reaction that resulted in inflammation. Drawing on mechanical theories of the body and renewed attention to the behavior of fluids, doctors complicated simple humoral explanations of disease. Blood, as a liquid, was presumed to behave as other liquids did. When heated, liquids move more rapidly; within the closed system of the human body, overheated blood coursed too rapidly through the vessels, generating friction. This friction in turn generated more heat and suppressed perspiration and urination, compromising the body’s natural means of expelling illness. Removing a small quantity of blood, doctors reasoned, would relieve some of the pressure and friction in the circulatory system and allow the body to regulate itself back to health.

Picking up the lancet, I roll up my sleeve and gesture to the bend of my elbow, where blue veins are faintly visible through my skin. Generally, I explain, blood was let through these veins by venesection, where the lancet—a small pointed blade that folds neatly into its wooden handle like a tiny Swiss Army knife—is used to make a small incision below a fillet—a looped bandage tightened on the upper arm to control blood flow. The process is akin to blood donation today, except that the blood removed will be discarded. Apothecaries and physicians, striving to be systematic and scientific, often caught the escaping blood in a bleeding bowl—a handled dish engraved with lines indicating the volume of the contents in ounces. The volume of blood removed, Buchan cautioned, “must be in proportion to the strength of the patient and the violence of the disease.” Generally, a single bloodletting sufficed, but if symptoms persisted, repeated bloodlettings might be advised.

Visitors are generally incredulous that the procedure was fairly commonplace, and that people did it in their homes without the supervision of a medical professional. Bloodletting was sometimes recommended to promote menstruation or encourage the production of fresh blood. Both published medical writings and private papers suggest that folk traditions of bloodletting for a variety of reasons persisted throughout the eighteenth century.


Modern guests question both the safety and the efficacy of bloodletting. In terms of safety, it was generally a low-risk procedure; one function of bleeding was to push pathogens out of the body, thus limiting the risk of blood-borne infections. Routine bloodletting was typically limited to six or eight ounces of blood. By comparison, blood donors today give sixteen ounces. The human body is actually fairly resilient and can withstand substantial blood loss, so even in acute cases where blood was repeatedly let, exsanguination was unlikely to be the cause of death. One famous case visitors sometimes bring up is the death of George Washington in December 1799. While it is difficult to know the circumstances precisely, Dr. David Morens of the National Institutes of Health argues that the first President was afflicted with acute bacterial epiglottitis. The epiglottis is the small flap that prevents food from entering the airway or air from entering the stomach; when it becomes infected it swells, making eating, drinking, and breathing increasingly difficult, and eventually impossible. According to notes taken by the trio of physicians who treated Washington, he endured four bloodlettings in twelve hours, removing a total of eighty ounces of blood—the limit of what was survivable. This aggressive treatment presaged the “heroic” medicine of the nineteenth century and was far out of line with the recommendations of earlier physicians such as Buchan. Even so, Morens suspects that asphyxiation, not bloodletting, was the cause of death.

Thus, while bloodletting probably caused few deaths, it also saved few lives. Aside from a possible placebo effect, bloodletting’s primary efficacy is in treating rare genetic blood disorders such as polycythemia (overproduction of red blood cells) and hemochromatosis (an iron overload disorder). The logic behind bloodletting seemed reasonable only because a critical piece of information was missing. “What actually caused most of the diseases doctors tried to treat with bloodletting?” I’ll ask. “Germs!” a visitor calls out. “Unfortunately,” I reply, “it will be another seventy-five years until the medical establishment accepts that we’re all covered in microscopic organisms that can kill us.”


The Common Cold

Most medical recommendations weren’t so seemingly bizarre, however. Eighteenth-century doctors strove to “assist Nature” in battling disease by recommending regimens—modifications of behavior, environment, and diet that were thought to promote recovery. Doctors and caretakers induced vomiting (an “upward purge”), defecation (a “downward purge”), urination, and/or sweating to help the body expel harmful substances and offered diets that were thought to help warm, cool, or strengthen the body. When visitors ask what the most commonly prescribed medicine was, we can’t give them a direct answer—the apothecaries kept track of debts and credits but not what was purchased—but we tell them the most common category of medicine we stocked was a laxative. Keeping one’s digestion regular was a priority in the eighteenth century.

Photo of Ebers Papyrus

The common cold has been with humanity for a very long time: it was described as far back as 1550 BCE, in the Egyptian medical text known as the Ebers Papyrus. (Wikimedia Commons)

Visitors are often surprised to hear that they unwittingly follow regimens themselves, often for the same common ailments that laid low our colonial and revolutionary forbears. The example I typically use is the common cold, for which there is and never has been, alas, a cure. Looking to the row of children typically pressed up against the counter, I ask, “When you’re sick and can’t go to school, do you get more rest, or more exercise?” “Rest,” they answer in chorus. “And where do you rest?” “In bed.” “And what do you eat a lot of when you’re sick?” “Soup” and “juice” are the usual answers. “You’re behaving much as you would have two hundred and fifty years ago!” I tell them. “Doctors recommended resting someplace warm and dry and eating foods that were light and easy to digest—including broths and soups.”

Visitors are fascinated and often charmed to hear that the treatment of colds has essentially stayed the same. “When you take medicine for your cold,” I continue, “does it make you feel better or make your cold go away?” Most people are dumbfounded when they consider that the majority of medicines, in the eighteenth century as today, serve to alleviate symptoms. Then as now, individuals and households selected treatments for stuffy noses, coughs, and fevers.


Surgery


While the treatment of disease has aspects both foreign and familiar, our distance from our forbears truly comes across in the comparatively primitive levels of surgery and midwifery. Because the squeamishness of guests varies widely, and because interpreters are discouraged from inducing nausea or fainting, we must proceed cautiously.

Surgery, visitors are shocked to hear, was not a prestigious profession until recently. In the eighteenth century, any physical undertaking for medical purposes was surgery—bandaging, brushing teeth, bloodletting. While the trades were separate in England, in the colonies apothecaries often took on surgical duties; low population density generally prevented specialization outside of large cities. In England, surgeons shared a guild (professional organization) with barbers, who pulled teeth and let blood as well as grooming and styling hair. A surgeon’s purview was more expansive—setting broken bones, amputating extremities when necessary, and removing surface tumors—work that required greater knowledge of anatomy.

Simple breaks could be set manually, as they are today. I often use myself as an example—I have an approximately fifteen-degree angle in my wrist from a bad fall several years ago. I explain that my wrist was set manually, with no pain management, very much as it might have been in the eighteenth century. (You know you’re a historian when that’s what you’re thinking on the gurney as a doctor jerks your bones back into alignment.)

engraving of splints

Before plaster casts were developed in the nineteenth century, broken bones could only be splinted. This engraving shows more elaborate splints for broken legs. (Wellcome Images)

Two factors limited the scope of surgical operations in the eighteenth century. The first was the lack of antisepsis; with no knowledge of germ theory, and thus little ability to control infection, surgeons avoided the gut and kept operations as simple and efficient as possible. The second was pain.

A visitor always asks, “What did they do for pain?” Upon being told, “Nothing,” they blanch and then argue.


“What about opium?”

“Opium makes you vomit, and you’re restrained during operations and often on your back. You wouldn’t want to choke to death during your surgery.”

“They had to have done SOMETHING! A few shots of whiskey at least.”

While we can’t be sure what people did at home or while waiting for the doctor to arrive, doctors opposed the consumption of spirits before surgery because of increased bleeding. Occasionally, a visitor will ask if patients were given a thump on the head to make them lose consciousness.

“Well, the pain will probably wake you up anyhow,” I point out, “and now you have a head injury as well as an amputation to deal with.”


Generally, amputations lasted less than five minutes—minimizing the risk of infection and the chances of the patient going into shock from blood loss and pain. Limbs weren’t simply lopped off, however. Surgeons could tie off large blood vessels to reduce blood loss, and the surgical kit we display shows the specialized knives, saws, and muscle retractors employed by surgeons to make closed stumps around the severed bone.

Removing troublesome tumors, most commonly from breast cancer, was another challenge surgeons faced. This surprises some visitors, who tend to think of cancer as a modern disease. I’ve even had a visitor insist that there couldn’t have been cancer two hundred years ago, when there were no chemical pesticides or preservatives. I informed him that cancer can also arise from naturally occurring mutations or malfunctions in cells—it even showed up in dinosaurs. Mastectomies have been performed for thousands of years. Because there was no means of targeting and controlling tumors, aggressive growth sometimes caused ulcerations through the skin, causing immense pain and generating a foul smell. Medicines such as tincture of myrrh were available to clean the ulcers and reduce the smell but did nothing to limit the cancer’s growth.

When ulceration made the pain unbearable or the tumor’s size interfered with everyday activities, sufferers resorted to surgery. Surgeons sought to remove the entire tumor, believing that if the cancer were not rooted out entirely, it would strike inward where they could not treat it. They were half right; cancer is prone to reappearing elsewhere in the body. Unfortunately, the removal of tumors triggers this—tumors secrete hormones that prevent the proliferation of cancer cells in other areas of the body. Removing tumors unleashes dormant cancer cells that have been distributed throughout the body. Without antisepsis and anesthesia, surgeons could not follow cancer inward.


Midwifery

Childbirth was one mystery partially penetrated in the eighteenth century. Prominent British physicians turned their attention to the anatomy and physiology of fetal development and conducted dissections—perhaps made possible by trade in freshly murdered cadavers in major British cities.

Smellie illustration

Illustration of fetal development in William Smellie’s Treatise on the Theory and Practice of Midwifery. (Wellcome Images)

William Smellie, a Scottish physician, produced some of the most accurate illustrations and descriptions of birth then available. Smellie’s Treatise on the Theory and Practice of Midwifery promoted the entry of male doctors into the traditionally sex-segregated birthing room. European medical schools began offering lecture series in midwifery leading to a certificate. The vast majority of women, especially in rural areas, continued to be delivered by traditional female midwives, but man-midwives were newly equipped to handle rare emergencies of obstructed delivery. Obstetrical forceps became more widely available over the course of the eighteenth century, though they were still cause for alarm; Smellie recommended that an “operator” carry the disarticulated forceps blades in his side-pockets, arrange himself under a sheet, and only then “take out and dispose the blades on each side of the patient; by which means, he will often be able to deliver with the forceps, without their being perceived by the woman herself, or any other of the assistants.”


You can read more about Smellie’s inventions and early modern birthing devices in Brandy Schillace’s Appendix article “Mother Machine: An Uncanny Valley in the Eighteenth Century.”

In the shop, we rarely talk about the other equipment male doctors carried, for fear of upsetting visitors or creating controversy. Men continued to carry the “destructive instruments” long used to extract fetuses in order to save the mother. With forceps, man-midwifery moved in the direction of delivery over dismemberment, but dismemberment remained an inescapable task until caesarean sections could be performed safely. Despite this avoidance, the subject periodically comes up, forcing me as the interpreter to rely on innuendo. One particularly discomfiting instance involved an eleven-year-old girl who asked about babies getting stuck during delivery. After I explained how forceps were used, she asked, “What if that didn’t work?” The best I could come up with was, “Then doctors had to get the baby out by any means necessary so the woman wouldn’t die.” She sensed my evasion and pressed on—“How did they do that?” Unwilling to explain how doctors used scissors and hooks in front of a group including children, I turned a despairing gaze on her mother. Fortunately, she sensed my panic and ushered her daughter outside; what explanation she offered, I do not know.

Engraving of instruments

Examples of some of the “destructive instruments” man-midwives carried. (Wellcome Images)

Most women, fortunately, experienced uncomplicated deliveries and did not require the services of a man-midwife. Birth was not quite so fraught with peril as many visitors believe. While I’ve had one visitor inform me that all women died in childbirth, human reproduction generally works quite well. American colonists enjoyed a remarkably high birthrate. While there were regional variations, maternal mortality was probably about two percent—roughly ten times the maternal mortality rate in the United States (which lags significantly behind other developed countries). Repeated childbearing compounded these risks; approximately 1 in 12 women died as a result of childbearing over the course of their lives. Childbirth was a leading cause of death for women between puberty and menopause.
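The jump from a roughly two percent risk per delivery to a 1-in-12 lifetime risk is just compounding over repeated childbearing. A back-of-envelope sketch, assuming each delivery carries an independent risk (the per-delivery rate is the essay's figure; the parity values are illustrative):

```python
def lifetime_risk(per_delivery_risk, deliveries):
    """Cumulative probability of dying in childbirth over repeated
    deliveries, assuming each delivery carries an independent risk."""
    return 1.0 - (1.0 - per_delivery_risk) ** deliveries

# Illustrative: at ~2% per delivery, risk compounds quickly with parity.
for n in (1, 4, 8):
    print(n, round(lifetime_risk(0.02, n), 3))
```

At that rate, roughly four deliveries already put the cumulative risk near the 1-in-12 range the essay cites, and the high colonial birthrate pushed many women well past that.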

Improvements in antisepsis, prenatal care, fetal and maternal monitoring, and family planning over the past two centuries have pulled birth and death further apart. Fear of death altered how parents related to infants and children, how couples faced childbearing, and how families shaped their reproductive strategies. While this fear persists today, it is far more contained than it was two centuries ago.



Americans today live in a world of medical privilege unimaginable to their colonial forbears. It’s not because we are smarter or better than we were two hundred and fifty years ago. We are the beneficiaries of a series of innovations that have fundamentally altered how we conceptualize the body and reduced once-common threats. Guests in the Apothecary Shop today think of headaches as their most frequent medical problem because so many pressing diseases have been taken off the table.


From this privileged perspective, it’s all too easy to look down on those who believed in bloodletting or resorted to amputation for broken limbs. But the drive to do something to treat illness, to seek explanations for disease as a means of control, to strive to hold off death—these impulses haven’t changed.

As I often tell visitors—give it two hundred and fifty years, and we’ll look stupid too.

Chimp social network shows how new ideas catch on (New Scientist)

19:00 30 September 2014 by Catherine Brahic

Three years ago, an adult chimpanzee called Nick dipped a piece of moss into a watering hole in Uganda’s Budongo Forest. Watched by a female, Nambi, he lifted the moss to his mouth and squeezed the water out. Nambi copied him and, over the next six days, moss sponging began to spread through the community. A chimp trend was born.

Until that day in November 2011, chimps had only been seen to copy actions in controlled experiments, and social learning had never been directly observed in the wild.

To prove that Nambi and the seven other chimps who started using moss sponges didn’t just come up with the idea independently, Catherine Hobaiter of the University of St Andrews, UK, and her colleagues used their own innovation: a statistical analysis of the community’s social network. They were able to track how moss-sponging spread and calculated that once a chimp had seen another use a moss sponge, it was 15 times more likely to do so itself.
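The logic behind that "15 times" figure can be sketched as a simple hazard model: a naive chimp acquires the behaviour at some baseline per-period rate, and having observed an informed associate multiplies that rate. (The study's actual network-based diffusion analysis is more elaborate; the baseline rate below is hypothetical, with only the 15x multiplier echoing the paper's estimate.)

```python
def acquisition_prob(periods, base_rate=0.01, multiplier=15.0, observed=False):
    """Probability that a naive individual picks up the behaviour within
    `periods` observation periods, under a simple per-period hazard.
    Observing an informed associate multiplies the per-period rate
    (15x echoes the paper's estimate; base_rate is hypothetical)."""
    rate = base_rate * (multiplier if observed else 1.0)
    return 1.0 - (1.0 - rate) ** periods

# Over the six days of the spread, a chimp that had watched a moss-sponger
# was far more likely to start sponging than one that had not.
watched = acquisition_prob(6, observed=True)
not_watched = acquisition_prob(6, observed=False)
```

Per period, the watched-to-unwatched ratio is exactly the multiplier; over longer windows the cumulative probabilities converge toward one, which is why such analyses estimate rates rather than simply comparing final counts of learners.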

A decade ago it was believed that only humans have the capacity to imitate, says Frans de Waal of Emory University in Atlanta, Georgia. “The present study is the first on apes to show by means of networking analysis that habits travel along paths of close relationships,” he says, adding that a similar idea was shown not long ago for humpback whale hunting techniques.

Caught in the act

Copying may seem like the easiest thing to us, but not all animals are capable of it. Chimps at the Gombe Stream reserve in Tanzania can copy each other using twigs to fish out termites, but the baboons that watch them haven’t picked up the trick. “They don’t get it,” says Andrew Whiten of the University of St Andrews.

Whiten previously listed 39 behaviours that were found only in some communities of chimps, suggesting these were picked up from other group members rather than being innate behaviours. Since then, more have been added, but they still number in the dozens, not the thousands.

Given how rarely chimps pick up trends, it’s exciting that someone was on hand to watch it happen in this latest study, says Whiten.

Ultimately, he says, our ability to both invent and copy meant our ancestors could exploit a cognitive niche. “They began hunting large game by doing it the brainy way.” Imitation, it turns out, is not just the sincerest form of flattery, it’s also a smart thing to do.

Journal reference: PLoS Biology, DOI: 10.1371/journal.pbio.1001960

The cultural side of science communication (Northwestern University)

30-Sep-2014

Hilary Hurd Anyaso

New research explores how culture affects our conceptions of nature

EVANSTON, Ill. — Do we think of nature as something that we enjoy when we visit a national park and something we need to “preserve?” Or do we think of ourselves as a part of nature? A bird’s nest is a part of nature, but what about a house?

The answers to these questions reflect different cultural orientations. They are also reflected in our actions, our speech and in cultural artifacts.

A new Northwestern University study, in partnership with the University of Washington, the American Indian Center of Chicago and the Menominee tribe of Wisconsin, focuses on science communication and how that discipline necessarily involves language and other media-related artifacts such as illustrations. The challenge is to identify effective ways of communicating information to culturally diverse groups in a way that avoids cultural polarization, say the authors.

“We suggest that trying to present science in a culturally neutral way is like trying to paint a picture without taking a perspective,” said Douglas Medin, lead author of the study and professor of psychology in the Weinberg College of Arts and Sciences and the School of Education and Social Policy at Northwestern.

This research builds on the broader research on cultural differences in the understanding of and engagement with science.

“We argue that science communication — for example, words, photographs and illustrations — necessarily makes use of artifacts, both physical and conceptual, and these artifacts commonly reflect the cultural orientations and assumptions of their creators,” write the authors.

“These cultural artifacts both reflect and reinforce ways of seeing the world and are correlated with cultural differences in ways of thinking about nature. Therefore, science communication must pay attention to culture and the corresponding different ways of looking at the world.”

Medin said their previous work reveals that Native Americans traditionally see themselves as a part of nature and tend to focus on ecological relationships. In contrast, European-Americans tend to see humans as apart from nature and focus more on taxonomic relationships.

“We show that these cultural differences are also reflected in media, such as children’s picture books,” said Medin, who co-authored the study with Megan Bang of the University of Washington. “Books authored and illustrated by Native Americans are more likely to have illustrations of scenes that are close-up, and the text is more likely to mention the plants, trees and other geographic features and relationships that are present compared with popular children’s books not done by Native Americans.

“The European-American cultural assumption that humans are not part of ecosystems is readily apparent in illustrations,” he said.

The authors searched Google Images for “ecosystems,” and 98 percent of the resulting images had no humans present. A fair number of the remaining 2 percent showed children outside the ecosystem, observing it through a magnifying glass and saying, “I spy an ecosystem.”

“These results suggest that formal and informal science communications are not culturally neutral but rather embody particular cultural assumptions that exclude people from nature,” Medin said.

Medin and his research team have developed a series of “urban ecology” programs at the American Indian Center of Chicago, and these programs suggest that children can learn about the rest of nature in urban settings and come to see humans as active players in the world ecosystems.

Concea opens public consultation on animal-use guide (MCTI)

The public may suggest changes to proposed manuals for research and teaching with primates and for clinical studies conducted outside conventional facilities.

The National Council for the Control of Animal Experimentation (Concea) opened on Thursday (25), with publication in the Diário Oficial da União (DOU), a 21-day public consultation on two chapters of the Brazilian Guide for the Production and Use of Animals in Teaching or Scientific Research Activities.

Approved in stages, the guide under development covers topics devoted to birds, dogs, cats, lagomorphs (such as rabbits and hares), and rodents, among other taxonomic groups.

The chapters under consultation deal with “non-human primates” and “clinical studies conducted in the field.” Suggested changes to the texts must be detailed and justified on forms available on the council’s website and then sent to the e-mail address consultapubl.concea@mcti.gov.br.

“This participation by society is important because the guide will be the basis for defining the requirements for licensing research and teaching activities with animals, without which the use of a given species will not be permitted, as established by the Arouca Law,” notes Concea coordinator José Mauro Granjeiro.

The two chapters are expected to incorporate input from society before Concea’s 26th Ordinary Meeting, on November 26 and 27, when the council plans to review the content and approve the final documents, to be published in the DOU. In the following months, other sections of the guide are scheduled for public consultation, covering other taxonomic groups such as fish, ruminants, horses, pigs, reptiles, and amphibians.

Also on Thursday, a list of 17 methods to replace or reduce the use of animals in toxicological testing was published. Divided into seven groups, the techniques measure the potential for skin and eye irritation and corrosion, phototoxicity, skin absorption and sensitization, acute toxicity, and genotoxicity.

Primates – At 73 pages, the chapter on non-human primates addresses the relevance of this group of animals to analyses of viral diseases and to biomedical research. The text links their “close phylogenetic relationship with humans” to their use in comparative studies of human diseases.

The guide details minimum requirements for facilities, from the physical structure of housing to breeding and experimentation areas, including environmental conditions, as well as husbandry procedures such as adequate feeding, cleaning of cages and objects, forms of physical restraint, environmental enrichment, and preventive medicine. Experimental methods, veterinary care, and principles of animal welfare also make up the chapter on primates.

“In general, regardless of the purpose for which primates are kept, housing should consist of a complex and stimulating enclosure that promotes good health and psychological well-being and provides full opportunity for social interaction, exercise, and the expression of a variety of behaviors and abilities inherent to the species,” the text states. “A satisfactory enclosure should give the animals enough space to maintain their normal habits of locomotion and behavior.”

Field studies – The other document under public consultation is intended to guide researchers and to define the minimum requirements for conducting “clinical studies in the field” – those carried out outside animal-use facilities – with respect to ethical aspects of species handling and welfare.

“Considering that one of Concea’s missions is to ensure that animals used in any type of scientific research have their integrity and welfare preserved, studies conducted outside the controlled environments of facilities for the use of animals in teaching or research activities must comply with the applicable rules,” the guide states.

Created in 2008, Concea is a multidisciplinary collegiate body with normative, consultative, deliberative, and appellate functions. Its responsibilities include, in addition to accrediting institutions that carry out activities in the sector, formulating rules on the humane use of animals for teaching and scientific research purposes and establishing procedures for the installation and operation of breeding centers, animal facilities, and animal experimentation laboratories.

(MCTI)